Test Report: Docker_Linux_crio 21790

0500345ed58569c501f3381e2b1a5a0e0bac6bd7:2025-10-27:42095

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.27
35 TestAddons/parallel/Registry 15.03
36 TestAddons/parallel/RegistryCreds 0.46
37 TestAddons/parallel/Ingress 148.48
38 TestAddons/parallel/InspektorGadget 5.26
39 TestAddons/parallel/MetricsServer 5.33
41 TestAddons/parallel/CSI 41.78
42 TestAddons/parallel/Headlamp 2.6
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 15.16
45 TestAddons/parallel/NvidiaDevicePlugin 5.28
46 TestAddons/parallel/Yakd 6.26
47 TestAddons/parallel/AmdGpuDevicePlugin 5.28
97 TestFunctional/parallel/ServiceCmdConnect 602.91
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.63
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.54
191 TestJSONOutput/pause/Command 2.23
197 TestJSONOutput/unpause/Command 1.87
263 TestPause/serial/Pause 7.14
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.26
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.22
312 TestStartStop/group/old-k8s-version/serial/Pause 6.61
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.47
321 TestStartStop/group/no-preload/serial/Pause 6.31
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.41
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.2
345 TestStartStop/group/embed-certs/serial/Pause 6.72
349 TestStartStop/group/newest-cni/serial/Pause 6.46
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.01
TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable volcano --alsologtostderr -v=1: exit status 11 (267.719169ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 21:55:50.980574  495393 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:55:50.981129  495393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:55:50.981140  495393 out.go:374] Setting ErrFile to fd 2...
	I1027 21:55:50.981145  495393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:55:50.981351  495393 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:55:50.981645  495393 mustload.go:66] Loading cluster: addons-681393
	I1027 21:55:50.982047  495393 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:55:50.982068  495393 addons.go:606] checking whether the cluster is paused
	I1027 21:55:50.982153  495393 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:55:50.982167  495393 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:55:50.982544  495393 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:55:51.000630  495393 ssh_runner.go:195] Run: systemctl --version
	I1027 21:55:51.000692  495393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:55:51.017835  495393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:55:51.118154  495393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:55:51.118247  495393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:55:51.151713  495393 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:55:51.151745  495393 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:55:51.151754  495393 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:55:51.151757  495393 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:55:51.151760  495393 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:55:51.151763  495393 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:55:51.151765  495393 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:55:51.151768  495393 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:55:51.151770  495393 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:55:51.151776  495393 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:55:51.151779  495393 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:55:51.151781  495393 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:55:51.151784  495393 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:55:51.151786  495393 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:55:51.151789  495393 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:55:51.151793  495393 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:55:51.151795  495393 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:55:51.151798  495393 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:55:51.151800  495393 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:55:51.151803  495393 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:55:51.151805  495393 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:55:51.151807  495393 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:55:51.151810  495393 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:55:51.151812  495393 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:55:51.151815  495393 cri.go:89] found id: ""
	I1027 21:55:51.151858  495393 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:55:51.167924  495393 out.go:203] 
	W1027 21:55:51.169074  495393 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:55:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:55:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:55:51.169095  495393 out.go:285] * 
	* 
	W1027 21:55:51.172377  495393 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:55:51.173621  495393 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.27s)
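Note: the identical exit status 11 trace recurs in the Registry and RegistryCreds failures below. In each case the addon disable itself never runs: minikube first checks whether the cluster is paused by listing runc containers, and on this cri-o node `sudo runc list -f json` exits 1 because /run/runc does not exist, which is surfaced as MK_ADDON_DISABLE_PAUSED. The following is a minimal reproduction sketch of that probe, not minikube's own code, meant to run inside the node (e.g. via minikube ssh):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same probe the failing runs execute over ssh: list runc-managed
		// containers as JSON.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// cri-o tracks its containers through its own runtime state, so on
			// this node runc has no /run/runc state directory and exits 1 with
			// "open /run/runc: no such file or directory".
			if strings.Contains(string(out), "no such file or directory") {
				fmt.Println("no runc state dir; containers here are managed by cri-o")
				return
			}
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers: %s\n", out)
	}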

TestAddons/parallel/Registry (15.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.580256ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-2tqh6" [6564a666-6603-4044-a2e5-b9e4e0700c5f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002560988s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wx6pv" [99f13eb6-27b7-4b76-9ed8-62ee24257d3a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00351323s
addons_test.go:392: (dbg) Run:  kubectl --context addons-681393 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-681393 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-681393 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.494957707s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable registry --alsologtostderr -v=1: exit status 11 (288.603873ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 21:56:15.821589  498023 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:15.821908  498023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:15.821919  498023 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:15.821924  498023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:15.822173  498023 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:15.822514  498023 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:15.822966  498023 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:15.822990  498023 addons.go:606] checking whether the cluster is paused
	I1027 21:56:15.823096  498023 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:15.823110  498023 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:15.823566  498023 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:15.844736  498023 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:15.844794  498023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:15.865926  498023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:15.973700  498023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:15.973805  498023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:16.011817  498023 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:16.011842  498023 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:16.011847  498023 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:16.011852  498023 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:16.011856  498023 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:16.011860  498023 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:16.011864  498023 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:16.011867  498023 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:16.011871  498023 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:16.011879  498023 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:16.011883  498023 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:16.011911  498023 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:16.011915  498023 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:16.011918  498023 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:16.011922  498023 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:16.011927  498023 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:16.011931  498023 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:16.011936  498023 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:16.011961  498023 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:16.011965  498023 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:16.011969  498023 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:16.011973  498023 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:16.011976  498023 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:16.011980  498023 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:16.011984  498023 cri.go:89] found id: ""
	I1027 21:56:16.012036  498023 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:16.030363  498023 out.go:203] 
	W1027 21:56:16.031504  498023 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:16.031524  498023 out.go:285] * 
	* 
	W1027 21:56:16.034562  498023 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:16.035857  498023 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.03s)

TestAddons/parallel/RegistryCreds (0.46s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.980199ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-681393
addons_test.go:332: (dbg) Run:  kubectl --context addons-681393 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (279.19079ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 21:56:11.861505  497043 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:11.861684  497043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:11.861695  497043 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:11.861701  497043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:11.862010  497043 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:11.862399  497043 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:11.862972  497043 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:11.862995  497043 addons.go:606] checking whether the cluster is paused
	I1027 21:56:11.863128  497043 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:11.863145  497043 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:11.863728  497043 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:11.883416  497043 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:11.883469  497043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:11.903849  497043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:12.012596  497043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:12.012710  497043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:12.050556  497043 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:12.050593  497043 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:12.050600  497043 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:12.050604  497043 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:12.050608  497043 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:12.050614  497043 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:12.050618  497043 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:12.050622  497043 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:12.050626  497043 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:12.050638  497043 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:12.050644  497043 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:12.050647  497043 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:12.050650  497043 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:12.050653  497043 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:12.050655  497043 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:12.050672  497043 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:12.050680  497043 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:12.050684  497043 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:12.050687  497043 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:12.050689  497043 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:12.050694  497043 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:12.050696  497043 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:12.050699  497043 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:12.050701  497043 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:12.050703  497043 cri.go:89] found id: ""
	I1027 21:56:12.050752  497043 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:12.064651  497043 out.go:203] 
	W1027 21:56:12.065464  497043 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:12.065488  497043 out.go:285] * 
	* 
	W1027 21:56:12.069129  497043 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:12.070016  497043 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.46s)

TestAddons/parallel/Ingress (148.48s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-681393 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-681393 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-681393 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [fb6c8f4f-c0c2-4f8a-b6bc-0cc22e1ff3de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [fb6c8f4f-c0c2-4f8a-b6bc-0cc22e1ff3de] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004260099s
I1027 21:56:22.030723  485668 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.710970727s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
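For context, the status 28 above is the remote curl's exit code passed back through ssh, and curl uses 28 for its timed-out error (CURLE_OPERATION_TIMEDOUT): the request was sent but no response came back through the ingress before the deadline. A rough host-side equivalent of the probe, hitting the node IP reported below (192.168.49.2) directly rather than going through minikube ssh, could look like this hypothetical sketch:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The ingress routes on the Host header, mirroring the failing
		// `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` above.
		req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 30 * time.Second} // assumed deadline
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("probe failed, matching the timeout seen in this test:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("ingress answered:", resp.Status)
	}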
addons_test.go:288: (dbg) Run:  kubectl --context addons-681393 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
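The nslookup step exercises the ingress-dns addon, which answers DNS queries on the node IP itself, so hello-john.test is resolved by asking 192.168.49.2 directly. A hypothetical Go equivalent of that query, using a resolver pinned to the node:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Pin the resolver to the minikube node, like
		// `nslookup hello-john.test 192.168.49.2`.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("hello-john.test resolves to", addrs)
	}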
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-681393
helpers_test.go:243: (dbg) docker inspect addons-681393:

-- stdout --
	[
	    {
	        "Id": "e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf",
	        "Created": "2025-10-27T21:54:04.757367764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T21:54:04.799404683Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf/hosts",
	        "LogPath": "/var/lib/docker/containers/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf-json.log",
	        "Name": "/addons-681393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-681393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-681393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf",
	                "LowerDir": "/var/lib/docker/overlay2/1f4986ab18921d5246d6778dac952103499ca88f791dcc633d95e9290302ca5f-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f4986ab18921d5246d6778dac952103499ca88f791dcc633d95e9290302ca5f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f4986ab18921d5246d6778dac952103499ca88f791dcc633d95e9290302ca5f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f4986ab18921d5246d6778dac952103499ca88f791dcc633d95e9290302ca5f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-681393",
	                "Source": "/var/lib/docker/volumes/addons-681393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-681393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-681393",
	                "name.minikube.sigs.k8s.io": "addons-681393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c02ce2472d6257ac9f1957ac5281b69604aa81edb772640a048ad5ed15e6200",
	            "SandboxKey": "/var/run/docker/netns/3c02ce2472d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-681393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:42:9b:53:13:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1f9929fc55781ac7dc66eb58190d50c60f897b144595a3fb0395ed718c198aa9",
	                    "EndpointID": "e5b32d746f785464947254505a9da99c8daf04ffacc6aff6d1d32a23c1c533e4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-681393",
	                        "e928af592dea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
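Aside: the Ports map in the inspect output above is where the ssh steps throughout this report get their endpoint; the sshutil lines all dial 127.0.0.1:32768, the host side of the 22/tcp binding. A hypothetical sketch of that lookup, reusing the exact inspect template the cli_runner log lines show:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template as in the cli_runner log lines earlier in this report.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-681393").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out))) // e.g. :32768
	}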
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-681393 -n addons-681393
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-681393 logs -n 25: (1.249999725s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS | PROFILE | USER | VERSION | START TIME | END TIME
	start   | --download-only -p binary-mirror-240698 --alsologtostderr --binary-mirror http://127.0.0.1:35931 --driver=docker  --container-runtime=crio | binary-mirror-240698 | jenkins | v1.37.0 | 27 Oct 25 21:53 UTC |
	delete  | -p binary-mirror-240698 | binary-mirror-240698 | jenkins | v1.37.0 | 27 Oct 25 21:53 UTC | 27 Oct 25 21:53 UTC
	addons  | disable dashboard -p addons-681393 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:53 UTC |
	addons  | enable dashboard -p addons-681393 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:53 UTC |
	start   | -p addons-681393 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:53 UTC | 27 Oct 25 21:55 UTC
	addons  | addons-681393 addons disable volcano --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:55 UTC |
	addons  | addons-681393 addons disable gcp-auth --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | enable headlamp -p addons-681393 --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable headlamp --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable nvidia-device-plugin --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable amd-gpu-device-plugin --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable inspektor-gadget --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable metrics-server --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | configure registry-creds -f ./testdata/addons_testconfig.json -p addons-681393 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC | 27 Oct 25 21:56 UTC
	addons  | addons-681393 addons disable registry-creds --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	ip      | addons-681393 ip | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC | 27 Oct 25 21:56 UTC
	addons  | addons-681393 addons disable registry --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable yakd --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable cloud-spanner --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	ssh     | addons-681393 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	ssh     | addons-681393 ssh cat /opt/local-path-provisioner/pvc-0c92e78d-b0c3-4e9d-862a-de825b3f6cd6_default_test-pvc/file1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC | 27 Oct 25 21:56 UTC
	addons  | addons-681393 addons disable storage-provisioner-rancher --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable volumesnapshots --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	addons  | addons-681393 addons disable csi-hostpath-driver --alsologtostderr -v=1 | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:56 UTC |
	ip      | addons-681393 ip | addons-681393 | jenkins | v1.37.0 | 27 Oct 25 21:58 UTC | 27 Oct 25 21:58 UTC
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 21:53:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
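
The header above documents the klog-style line format used throughout this log. As a minimal shell sketch (the saved log file name is hypothetical), lines can be filtered by severity and reduced to timestamp and message:

    # keep only warnings/errors from a saved start log
    grep -E '^[[:space:]]*[WE][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6}' last_start.log
    # strip the severity/date/thread/file:line prefix, leaving timestamp and message
    sed -nE 's/^[[:space:]]*[IWEF][0-9]{4} ([0-9:.]+) +[0-9]+ [^]]+] (.*)/\1  \2/p' last_start.log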
	I1027 21:53:44.196138  487076 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:53:44.196413  487076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:53:44.196423  487076 out.go:374] Setting ErrFile to fd 2...
	I1027 21:53:44.196428  487076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:53:44.196697  487076 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:53:44.197331  487076 out.go:368] Setting JSON to false
	I1027 21:53:44.198586  487076 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5763,"bootTime":1761596261,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 21:53:44.198715  487076 start.go:143] virtualization: kvm guest
	I1027 21:53:44.200288  487076 out.go:179] * [addons-681393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 21:53:44.201585  487076 notify.go:221] Checking for updates...
	I1027 21:53:44.201592  487076 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 21:53:44.202558  487076 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 21:53:44.203530  487076 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 21:53:44.204426  487076 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 21:53:44.205329  487076 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 21:53:44.206250  487076 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 21:53:44.207356  487076 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 21:53:44.230412  487076 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 21:53:44.230499  487076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 21:53:44.287750  487076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-27 21:53:44.278034178 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 21:53:44.287861  487076 docker.go:318] overlay module found
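
minikube probes the host by shelling out to docker system info with a JSON template, as seen twice above (once before driver selection, once while validating the chosen driver). A hedged sketch of pulling out the same fields it parses, assuming jq is installed:

    # same data minikube inspects at info.go:266, reduced to a few fields
    docker system info --format '{{json .}}' | jq '{NCPU, MemTotal, CgroupDriver, OperatingSystem}'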
	I1027 21:53:44.289313  487076 out.go:179] * Using the docker driver based on user configuration
	I1027 21:53:44.290208  487076 start.go:307] selected driver: docker
	I1027 21:53:44.290225  487076 start.go:928] validating driver "docker" against <nil>
	I1027 21:53:44.290248  487076 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 21:53:44.290815  487076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 21:53:44.351289  487076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-27 21:53:44.340980418 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 21:53:44.351459  487076 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 21:53:44.351673  487076 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 21:53:44.352915  487076 out.go:179] * Using Docker driver with root privileges
	I1027 21:53:44.353821  487076 cni.go:84] Creating CNI manager for ""
	I1027 21:53:44.353892  487076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 21:53:44.353904  487076 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 21:53:44.354005  487076 start.go:351] cluster config:
	{Name:addons-681393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:53:44.355071  487076 out.go:179] * Starting "addons-681393" primary control-plane node in "addons-681393" cluster
	I1027 21:53:44.355972  487076 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 21:53:44.356858  487076 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 21:53:44.357681  487076 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:53:44.357711  487076 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 21:53:44.357717  487076 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 21:53:44.357817  487076 cache.go:59] Caching tarball of preloaded images
	I1027 21:53:44.357913  487076 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 21:53:44.357924  487076 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 21:53:44.358285  487076 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/config.json ...
	I1027 21:53:44.358314  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/config.json: {Name:mkeb388ab1ce30b216f0956f96929fe834e2e844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:53:44.373641  487076 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 21:53:44.373748  487076 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 21:53:44.373765  487076 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 21:53:44.373769  487076 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 21:53:44.373780  487076 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 21:53:44.373787  487076 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1027 21:53:57.410864  487076 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1027 21:53:57.410907  487076 cache.go:233] Successfully downloaded all kic artifacts
	I1027 21:53:57.410957  487076 start.go:360] acquireMachinesLock for addons-681393: {Name:mka31f444ade0febfee0aa58b30475f233a1624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 21:53:57.411091  487076 start.go:364] duration metric: took 104.073µs to acquireMachinesLock for "addons-681393"
	I1027 21:53:57.411135  487076 start.go:93] Provisioning new machine with config: &{Name:addons-681393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 21:53:57.411201  487076 start.go:125] createHost starting for "" (driver="docker")
	I1027 21:53:57.412647  487076 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1027 21:53:57.412901  487076 start.go:159] libmachine.API.Create for "addons-681393" (driver="docker")
	I1027 21:53:57.412936  487076 client.go:173] LocalClient.Create starting
	I1027 21:53:57.413053  487076 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 21:53:57.547980  487076 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 21:53:57.602736  487076 cli_runner.go:164] Run: docker network inspect addons-681393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 21:53:57.620075  487076 cli_runner.go:211] docker network inspect addons-681393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 21:53:57.620152  487076 network_create.go:284] running [docker network inspect addons-681393] to gather additional debugging logs...
	I1027 21:53:57.620176  487076 cli_runner.go:164] Run: docker network inspect addons-681393
	W1027 21:53:57.635848  487076 cli_runner.go:211] docker network inspect addons-681393 returned with exit code 1
	I1027 21:53:57.635879  487076 network_create.go:287] error running [docker network inspect addons-681393]: docker network inspect addons-681393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-681393 not found
	I1027 21:53:57.635905  487076 network_create.go:289] output of [docker network inspect addons-681393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-681393 not found
	
	** /stderr **
	I1027 21:53:57.636036  487076 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 21:53:57.651582  487076 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001616cc0}
	I1027 21:53:57.651622  487076 network_create.go:124] attempt to create docker network addons-681393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1027 21:53:57.651681  487076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-681393 addons-681393
	I1027 21:53:57.709008  487076 network_create.go:108] docker network addons-681393 192.168.49.0/24 created
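
The dedicated bridge network is created with the exact docker network create flags shown above. A sketch for verifying the result (profile name taken from this log):

    # confirm the subnet/gateway minikube chose for the profile network
    docker network inspect addons-681393 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true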
	I1027 21:53:57.709041  487076 kic.go:121] calculated static IP "192.168.49.2" for the "addons-681393" container
	I1027 21:53:57.709212  487076 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 21:53:57.724843  487076 cli_runner.go:164] Run: docker volume create addons-681393 --label name.minikube.sigs.k8s.io=addons-681393 --label created_by.minikube.sigs.k8s.io=true
	I1027 21:53:57.742196  487076 oci.go:103] Successfully created a docker volume addons-681393
	I1027 21:53:57.742304  487076 cli_runner.go:164] Run: docker run --rm --name addons-681393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-681393 --entrypoint /usr/bin/test -v addons-681393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 21:54:00.277070  487076 cli_runner.go:217] Completed: docker run --rm --name addons-681393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-681393 --entrypoint /usr/bin/test -v addons-681393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.534718282s)
	I1027 21:54:00.277102  487076 oci.go:107] Successfully prepared a docker volume addons-681393
	I1027 21:54:00.277153  487076 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:54:00.277179  487076 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 21:54:00.277250  487076 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-681393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 21:54:04.683664  487076 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-681393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.406360153s)
	I1027 21:54:04.683700  487076 kic.go:203] duration metric: took 4.406516454s to extract preloaded images to volume ...
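
The preload step above follows a general pattern: a throwaway container whose entrypoint is tar unpacks a host tarball into a named volume. A generic sketch of the same idea (volume and tarball names are hypothetical; image as in the log, digest elided; assumes lz4 is present in the image, as it is in kicbase here):

    # unpack a host tarball into a docker volume via a disposable container
    docker volume create demo-vol
    docker run --rm -v "$PWD/preload.tar.lz4:/preloaded.tar:ro" -v demo-vol:/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
      -I lz4 -xf /preloaded.tar -C /extractDir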
	W1027 21:54:04.683806  487076 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 21:54:04.683874  487076 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 21:54:04.683927  487076 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 21:54:04.741110  487076 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-681393 --name addons-681393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-681393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-681393 --network addons-681393 --ip 192.168.49.2 --volume addons-681393:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 21:54:05.033395  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Running}}
	I1027 21:54:05.052211  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:05.070829  487076 cli_runner.go:164] Run: docker exec addons-681393 stat /var/lib/dpkg/alternatives/iptables
	I1027 21:54:05.121723  487076 oci.go:144] the created container "addons-681393" has a running status.
	I1027 21:54:05.121791  487076 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa...
	I1027 21:54:05.441384  487076 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 21:54:05.467223  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:05.486829  487076 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 21:54:05.486849  487076 kic_runner.go:114] Args: [docker exec --privileged addons-681393 chown docker:docker /home/docker/.ssh/authorized_keys]
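
The key provisioning above amounts to generating an RSA keypair on the host and installing the public half for the in-container docker user. A roughly equivalent sketch (key path hypothetical; assumes /home/docker/.ssh already exists in the container, as it does in the kicbase image per this log):

    ssh-keygen -t rsa -f ./id_rsa -N ''
    docker cp ./id_rsa.pub addons-681393:/home/docker/.ssh/authorized_keys
    docker exec --privileged addons-681393 chown docker:docker /home/docker/.ssh/authorized_keys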
	I1027 21:54:05.530990  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:05.548580  487076 machine.go:94] provisionDockerMachine start ...
	I1027 21:54:05.548695  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:05.567072  487076 main.go:143] libmachine: Using SSH client type: native
	I1027 21:54:05.567417  487076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1027 21:54:05.567433  487076 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 21:54:05.708166  487076 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-681393
	
	I1027 21:54:05.708208  487076 ubuntu.go:182] provisioning hostname "addons-681393"
	I1027 21:54:05.708317  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:05.726628  487076 main.go:143] libmachine: Using SSH client type: native
	I1027 21:54:05.726852  487076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1027 21:54:05.726866  487076 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-681393 && echo "addons-681393" | sudo tee /etc/hostname
	I1027 21:54:05.876717  487076 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-681393
	
	I1027 21:54:05.876798  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:05.893775  487076 main.go:143] libmachine: Using SSH client type: native
	I1027 21:54:05.894032  487076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1027 21:54:05.894050  487076 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-681393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-681393/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-681393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 21:54:06.034846  487076 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 21:54:06.034879  487076 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 21:54:06.034910  487076 ubuntu.go:190] setting up certificates
	I1027 21:54:06.034936  487076 provision.go:84] configureAuth start
	I1027 21:54:06.035005  487076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-681393
	I1027 21:54:06.053178  487076 provision.go:143] copyHostCerts
	I1027 21:54:06.053279  487076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 21:54:06.053445  487076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 21:54:06.053572  487076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 21:54:06.053665  487076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.addons-681393 san=[127.0.0.1 192.168.49.2 addons-681393 localhost minikube]
	I1027 21:54:06.495624  487076 provision.go:177] copyRemoteCerts
	I1027 21:54:06.495693  487076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 21:54:06.495746  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:06.513370  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:06.614696  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 21:54:06.635122  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 21:54:06.654062  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 21:54:06.672926  487076 provision.go:87] duration metric: took 637.959139ms to configureAuth
	I1027 21:54:06.672980  487076 ubuntu.go:206] setting minikube options for container-runtime
	I1027 21:54:06.673183  487076 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:54:06.673300  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:06.691147  487076 main.go:143] libmachine: Using SSH client type: native
	I1027 21:54:06.691379  487076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1027 21:54:06.691404  487076 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 21:54:06.942756  487076 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 21:54:06.942786  487076 machine.go:97] duration metric: took 1.394182586s to provisionDockerMachine
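
The sysconfig drop-in written above is how minikube hands the --insecure-registry option to CRI-O. A quick sketch to confirm it took effect (run inside the node, e.g. via minikube -p addons-681393 ssh):

    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # expect: active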
	I1027 21:54:06.942801  487076 client.go:176] duration metric: took 9.529840427s to LocalClient.Create
	I1027 21:54:06.942821  487076 start.go:167] duration metric: took 9.529921339s to libmachine.API.Create "addons-681393"
	I1027 21:54:06.942831  487076 start.go:293] postStartSetup for "addons-681393" (driver="docker")
	I1027 21:54:06.942844  487076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 21:54:06.942920  487076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 21:54:06.943000  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:06.960764  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:07.062936  487076 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 21:54:07.066417  487076 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 21:54:07.066446  487076 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 21:54:07.066459  487076 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 21:54:07.066529  487076 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 21:54:07.066557  487076 start.go:296] duration metric: took 123.719178ms for postStartSetup
	I1027 21:54:07.066849  487076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-681393
	I1027 21:54:07.085290  487076 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/config.json ...
	I1027 21:54:07.085554  487076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 21:54:07.085597  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:07.102039  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:07.199332  487076 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 21:54:07.204131  487076 start.go:128] duration metric: took 9.79291352s to createHost
	I1027 21:54:07.204156  487076 start.go:83] releasing machines lock for "addons-681393", held for 9.793051774s
	I1027 21:54:07.204224  487076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-681393
	I1027 21:54:07.220843  487076 ssh_runner.go:195] Run: cat /version.json
	I1027 21:54:07.220887  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:07.220935  487076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 21:54:07.221028  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:07.238116  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:07.238553  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:07.387851  487076 ssh_runner.go:195] Run: systemctl --version
	I1027 21:54:07.394478  487076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 21:54:07.429677  487076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 21:54:07.434513  487076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 21:54:07.434571  487076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 21:54:07.460129  487076 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 21:54:07.460160  487076 start.go:496] detecting cgroup driver to use...
	I1027 21:54:07.460199  487076 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 21:54:07.460257  487076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 21:54:07.476354  487076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 21:54:07.488734  487076 docker.go:218] disabling cri-docker service (if available) ...
	I1027 21:54:07.488796  487076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 21:54:07.504774  487076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 21:54:07.522817  487076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 21:54:07.604458  487076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 21:54:07.691886  487076 docker.go:234] disabling docker service ...
	I1027 21:54:07.691977  487076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 21:54:07.711186  487076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 21:54:07.723987  487076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 21:54:07.801751  487076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 21:54:07.884336  487076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 21:54:07.896841  487076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 21:54:07.910746  487076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 21:54:07.910812  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.920816  487076 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 21:54:07.920880  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.929810  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.938445  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.947250  487076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 21:54:07.955237  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.963729  487076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.976935  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
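
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. A sketch to spot-check the keys it touched; the expected values follow directly from the commands above:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",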
	I1027 21:54:07.985425  487076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 21:54:07.992454  487076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 21:54:07.999570  487076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:54:08.076718  487076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 21:54:08.181400  487076 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 21:54:08.181478  487076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 21:54:08.185925  487076 start.go:564] Will wait 60s for crictl version
	I1027 21:54:08.186001  487076 ssh_runner.go:195] Run: which crictl
	I1027 21:54:08.189580  487076 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 21:54:08.215607  487076 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 21:54:08.215689  487076 ssh_runner.go:195] Run: crio --version
	I1027 21:54:08.243925  487076 ssh_runner.go:195] Run: crio --version
	I1027 21:54:08.273531  487076 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 21:54:08.274718  487076 cli_runner.go:164] Run: docker network inspect addons-681393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 21:54:08.292050  487076 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 21:54:08.296345  487076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 21:54:08.306680  487076 kubeadm.go:884] updating cluster {Name:addons-681393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 21:54:08.306792  487076 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:54:08.306837  487076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 21:54:08.339004  487076 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 21:54:08.339028  487076 crio.go:433] Images already preloaded, skipping extraction
	I1027 21:54:08.339082  487076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 21:54:08.366590  487076 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 21:54:08.366617  487076 cache_images.go:86] Images are preloaded, skipping loading
	I1027 21:54:08.366625  487076 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1027 21:54:08.366736  487076 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-681393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 21:54:08.366803  487076 ssh_runner.go:195] Run: crio config
	I1027 21:54:08.413816  487076 cni.go:84] Creating CNI manager for ""
	I1027 21:54:08.413837  487076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 21:54:08.413857  487076 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 21:54:08.413882  487076 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-681393 NodeName:addons-681393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 21:54:08.414020  487076 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-681393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
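
The generated config above is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). As a sketch, a file of this shape can be checked with the kubeadm binary this log installs (assumes the config validate subcommand is available, as it is in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new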
	
	I1027 21:54:08.414099  487076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 21:54:08.422593  487076 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 21:54:08.422687  487076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 21:54:08.430523  487076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 21:54:08.442938  487076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 21:54:08.457924  487076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1027 21:54:08.470799  487076 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 21:54:08.474559  487076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 21:54:08.484283  487076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:54:08.559619  487076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 21:54:08.580605  487076 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393 for IP: 192.168.49.2
	I1027 21:54:08.580631  487076 certs.go:195] generating shared ca certs ...
	I1027 21:54:08.580647  487076 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:08.580798  487076 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 21:54:08.762609  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt ...
	I1027 21:54:08.762642  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt: {Name:mk6bcc704cee40f583b2e9c7ae9ea195abf7214d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:08.762839  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key ...
	I1027 21:54:08.762850  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key: {Name:mk7a7b8deca77163260202e72f732a394a4db049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:08.762927  487076 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 21:54:09.146118  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt ...
	I1027 21:54:09.146155  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt: {Name:mk4b436ce6a95536f63be1ea5da174a20f4ac530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.146342  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key ...
	I1027 21:54:09.146353  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key: {Name:mk4847455581689491a6bf7b9ac6f36470c32a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.146425  487076 certs.go:257] generating profile certs ...
	I1027 21:54:09.146502  487076 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.key
	I1027 21:54:09.146517  487076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt with IP's: []
	I1027 21:54:09.619928  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt ...
	I1027 21:54:09.619969  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: {Name:mka12e803ffb734b9b8fbd52c50d7f8ff1b3b48a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.620195  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.key ...
	I1027 21:54:09.620212  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.key: {Name:mk262048845205be7a32e300b1501d8a59098073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.620324  487076 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key.57447788
	I1027 21:54:09.620355  487076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt.57447788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1027 21:54:09.842107  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt.57447788 ...
	I1027 21:54:09.842143  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt.57447788: {Name:mka2591bb91c57245fbeb03b480901a5062a0ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.842373  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key.57447788 ...
	I1027 21:54:09.842393  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key.57447788: {Name:mk08dc4390b209ab64acba576028fc77cb955e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.842505  487076 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt.57447788 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt
	I1027 21:54:09.842613  487076 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key.57447788 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key
	I1027 21:54:09.842686  487076 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.key
	I1027 21:54:09.842723  487076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.crt with IP's: []
	I1027 21:54:09.983297  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.crt ...
	I1027 21:54:09.983329  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.crt: {Name:mkdb2c2420ba72ff809e68c2c013664c4764445c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.983545  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.key ...
	I1027 21:54:09.983566  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.key: {Name:mk4507bac571aa59f6c90fe6f0a21dd5e9ccdb08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
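	The steps from 21:54:08.580631 through here are plain x509 plumbing: two self-signed CAs (minikubeCA and proxyClientCA), then leaf certificates signed by them, all written under file locks. A minimal sketch of the CA half using only the standard library (the helper name and key size are assumptions, not minikube's actual code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"os"
	"time"
)

// newCA creates a self-signed CA certificate and writes the PEM-encoded
// cert and key to disk - roughly what certs.go's "generating ca cert"
// step amounts to.
func newCA(commonName, certPath, keyPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: commonName},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true, // self-signed root: template is its own parent below
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile(certPath, certPEM, 0644); err != nil {
		return err
	}
	return os.WriteFile(keyPath, keyPEM, 0600)
}

func main() {
	if err := newCA("minikubeCA", "ca.crt", "ca.key"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}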
	I1027 21:54:09.983816  487076 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 21:54:09.983861  487076 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 21:54:09.983898  487076 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 21:54:09.983929  487076 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 21:54:09.984647  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 21:54:10.003724  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 21:54:10.022067  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 21:54:10.040346  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 21:54:10.058132  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 21:54:10.076152  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 21:54:10.094505  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 21:54:10.112335  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 21:54:10.129913  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 21:54:10.149341  487076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 21:54:10.162528  487076 ssh_runner.go:195] Run: openssl version
	I1027 21:54:10.169245  487076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 21:54:10.179968  487076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:54:10.184117  487076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:54:10.184201  487076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:54:10.222598  487076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
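	The command pair at 21:54:10.184201 and 21:54:10.222598 is how OpenSSL-style trust stores are populated: compute the certificate's subject hash (b5213941 here), then symlink <hash>.0 in /etc/ssl/certs to the PEM so TLS clients can find the CA by subject. The same two steps in Go, assuming openssl is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashAndLink mirrors the two log lines above: ask openssl for the
// subject hash of a CA certificate, then create the <hash>.0 symlink
// that OpenSSL-based clients use for lookup by subject.
func hashAndLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -f: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}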
	I1027 21:54:10.231733  487076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 21:54:10.235415  487076 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 21:54:10.235460  487076 kubeadm.go:401] StartCluster: {Name:addons-681393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:54:10.235559  487076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:54:10.235629  487076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:54:10.264489  487076 cri.go:89] found id: ""
	I1027 21:54:10.264550  487076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 21:54:10.273177  487076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 21:54:10.281356  487076 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 21:54:10.281419  487076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 21:54:10.289299  487076 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 21:54:10.289316  487076 kubeadm.go:158] found existing configuration files:
	
	I1027 21:54:10.289354  487076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 21:54:10.296990  487076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 21:54:10.297041  487076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 21:54:10.304337  487076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 21:54:10.311839  487076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 21:54:10.311902  487076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 21:54:10.319098  487076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 21:54:10.326854  487076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 21:54:10.326913  487076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 21:54:10.334712  487076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 21:54:10.342519  487076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 21:54:10.342589  487076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 21:54:10.350034  487076 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 21:54:10.390857  487076 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 21:54:10.390937  487076 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 21:54:10.412642  487076 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 21:54:10.412725  487076 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 21:54:10.412765  487076 kubeadm.go:319] OS: Linux
	I1027 21:54:10.412825  487076 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 21:54:10.412885  487076 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 21:54:10.412938  487076 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 21:54:10.413009  487076 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 21:54:10.413065  487076 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 21:54:10.413129  487076 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 21:54:10.413186  487076 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 21:54:10.413251  487076 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 21:54:10.476095  487076 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 21:54:10.476238  487076 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 21:54:10.476368  487076 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 21:54:10.485278  487076 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 21:54:10.487060  487076 out.go:252]   - Generating certificates and keys ...
	I1027 21:54:10.487171  487076 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 21:54:10.487283  487076 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 21:54:10.707292  487076 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 21:54:11.105894  487076 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 21:54:11.947310  487076 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 21:54:12.446295  487076 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 21:54:12.584355  487076 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 21:54:12.584483  487076 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-681393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 21:54:12.734689  487076 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 21:54:12.734886  487076 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-681393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 21:54:12.966708  487076 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 21:54:13.527144  487076 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 21:54:13.667071  487076 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 21:54:13.667177  487076 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 21:54:14.367159  487076 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 21:54:14.579565  487076 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 21:54:14.762325  487076 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 21:54:14.967994  487076 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 21:54:15.054403  487076 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 21:54:15.054898  487076 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 21:54:15.058599  487076 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 21:54:15.061878  487076 out.go:252]   - Booting up control plane ...
	I1027 21:54:15.062017  487076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 21:54:15.062117  487076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 21:54:15.062224  487076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 21:54:15.075075  487076 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 21:54:15.075183  487076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 21:54:15.081890  487076 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 21:54:15.082120  487076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 21:54:15.082179  487076 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 21:54:15.177211  487076 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 21:54:15.177336  487076 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 21:54:15.679093  487076 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.980126ms
	I1027 21:54:15.682886  487076 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 21:54:15.683029  487076 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1027 21:54:15.683173  487076 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 21:54:15.683291  487076 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 21:54:17.675020  487076 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.991947425s
	I1027 21:54:17.783162  487076 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.100155054s
	I1027 21:54:19.185532  487076 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50257414s
	I1027 21:54:19.197521  487076 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 21:54:19.209258  487076 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 21:54:19.219808  487076 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 21:54:19.220135  487076 kubeadm.go:319] [mark-control-plane] Marking the node addons-681393 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 21:54:19.229138  487076 kubeadm.go:319] [bootstrap-token] Using token: ztjz0y.5i3bg84f6s7j3keq
	I1027 21:54:19.230540  487076 out.go:252]   - Configuring RBAC rules ...
	I1027 21:54:19.230698  487076 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 21:54:19.234720  487076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 21:54:19.240750  487076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 21:54:19.243534  487076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 21:54:19.247436  487076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 21:54:19.250113  487076 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 21:54:19.592659  487076 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 21:54:20.007211  487076 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 21:54:20.591373  487076 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 21:54:20.594292  487076 kubeadm.go:319] 
	I1027 21:54:20.594403  487076 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 21:54:20.594415  487076 kubeadm.go:319] 
	I1027 21:54:20.594523  487076 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 21:54:20.594533  487076 kubeadm.go:319] 
	I1027 21:54:20.594582  487076 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 21:54:20.594688  487076 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 21:54:20.594762  487076 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 21:54:20.594780  487076 kubeadm.go:319] 
	I1027 21:54:20.594852  487076 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 21:54:20.594862  487076 kubeadm.go:319] 
	I1027 21:54:20.594936  487076 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 21:54:20.594978  487076 kubeadm.go:319] 
	I1027 21:54:20.595069  487076 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 21:54:20.595177  487076 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 21:54:20.595277  487076 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 21:54:20.595287  487076 kubeadm.go:319] 
	I1027 21:54:20.595388  487076 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 21:54:20.595474  487076 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 21:54:20.595487  487076 kubeadm.go:319] 
	I1027 21:54:20.595587  487076 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ztjz0y.5i3bg84f6s7j3keq \
	I1027 21:54:20.595708  487076 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 21:54:20.595756  487076 kubeadm.go:319] 	--control-plane 
	I1027 21:54:20.595784  487076 kubeadm.go:319] 
	I1027 21:54:20.595906  487076 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 21:54:20.595916  487076 kubeadm.go:319] 
	I1027 21:54:20.596038  487076 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ztjz0y.5i3bg84f6s7j3keq \
	I1027 21:54:20.596188  487076 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
	I1027 21:54:20.598618  487076 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 21:54:20.598728  487076 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
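	The health-check phases above ([kubelet-check] against http://127.0.0.1:10248/healthz, [control-plane-check] against the apiserver, controller-manager, and scheduler endpoints) are simple poll-until-200 loops with a 4m0s budget. A minimal sketch of that pattern; the 500ms interval is an assumption, and certificate verification is skipped because the control-plane endpoints serve self-signed TLS during bootstrap:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a healthz-style endpoint until it returns 200 or
// the timeout elapses - roughly what the kubeadm check phases do.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}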
	I1027 21:54:20.598759  487076 cni.go:84] Creating CNI manager for ""
	I1027 21:54:20.598770  487076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 21:54:20.600166  487076 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 21:54:20.601063  487076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 21:54:20.605501  487076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 21:54:20.605518  487076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 21:54:20.619118  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
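	The choice logged by cni.go at 21:54:20.598770 - kindnet for the docker driver with the crio runtime - comes down to the kicbase image shipping no usable CNI configuration for non-docker runtimes. Condensed to its core (the fallback branch is a simplification, not minikube's full selection logic):

package main

import "fmt"

// chooseCNI reflects the decision above: the kic (docker) driver paired
// with a non-docker runtime such as crio gets kindnet applied as a
// manifest; other combinations fall through to simpler defaults.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // assumption: simplified fallback
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}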
	I1027 21:54:20.834581  487076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 21:54:20.834666  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:20.834711  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-681393 minikube.k8s.io/updated_at=2025_10_27T21_54_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=addons-681393 minikube.k8s.io/primary=true
	I1027 21:54:20.930240  487076 ops.go:34] apiserver oom_adj: -16
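	The -16 above comes from reading /proc/<pid>/oom_adj for the apiserver, confirming the kernel's OOM killer will strongly prefer other victims over it. The same probe in Go, assuming a single kube-apiserver process so pgrep returns one PID:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reproduces the shell one-liner from the log:
// cat /proc/$(pgrep kube-apiserver)/oom_adj
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // expect -16 on a minikube node
}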
	I1027 21:54:20.930257  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:21.431033  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:21.930779  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:22.431204  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:22.931299  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:23.430479  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:23.930413  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:24.430491  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:24.931047  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:25.430924  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:25.930836  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:25.998100  487076 kubeadm.go:1114] duration metric: took 5.163502606s to wait for elevateKubeSystemPrivileges
	I1027 21:54:25.998141  487076 kubeadm.go:403] duration metric: took 15.762683617s to StartCluster
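	The burst of "get sa default" calls every ~500ms is elevateKubeSystemPrivileges waiting for kube-controller-manager to create the default service account, which the minikube-rbac cluster-admin binding attaches to. The loop reduces to something like this (the interval matches the timestamps; the timeout and bare kubectl-on-PATH are assumptions - the log invokes the versioned binary via sudo):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` on a fixed interval
// until it exits 0, i.e. the service account exists.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}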
	I1027 21:54:25.998167  487076 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:25.998290  487076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 21:54:25.998867  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:25.999147  487076 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 21:54:25.999192  487076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 21:54:25.999210  487076 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 21:54:25.999473  487076 addons.go:69] Setting yakd=true in profile "addons-681393"
	I1027 21:54:25.999516  487076 addons.go:238] Setting addon yakd=true in "addons-681393"
	I1027 21:54:25.999555  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:25.999574  487076 addons.go:69] Setting inspektor-gadget=true in profile "addons-681393"
	I1027 21:54:25.999616  487076 addons.go:238] Setting addon inspektor-gadget=true in "addons-681393"
	I1027 21:54:25.999670  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:25.999741  487076 addons.go:69] Setting metrics-server=true in profile "addons-681393"
	I1027 21:54:25.999768  487076 addons.go:238] Setting addon metrics-server=true in "addons-681393"
	I1027 21:54:25.999796  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:25.999990  487076 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-681393"
	I1027 21:54:25.999998  487076 addons.go:69] Setting storage-provisioner=true in profile "addons-681393"
	I1027 21:54:26.000026  487076 addons.go:238] Setting addon storage-provisioner=true in "addons-681393"
	I1027 21:54:26.000065  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.000079  487076 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-681393"
	I1027 21:54:26.000102  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.000183  487076 addons.go:69] Setting gcp-auth=true in profile "addons-681393"
	I1027 21:54:26.000215  487076 mustload.go:66] Loading cluster: addons-681393
	I1027 21:54:26.000276  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.000357  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.000744  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.001237  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.001730  487076 addons.go:69] Setting default-storageclass=true in profile "addons-681393"
	I1027 21:54:26.001761  487076 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-681393"
	I1027 21:54:26.002080  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.002291  487076 out.go:179] * Verifying Kubernetes components...
	I1027 21:54:26.002511  487076 addons.go:69] Setting volcano=true in profile "addons-681393"
	I1027 21:54:26.002532  487076 addons.go:238] Setting addon volcano=true in "addons-681393"
	I1027 21:54:26.002563  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.002638  487076 addons.go:69] Setting ingress=true in profile "addons-681393"
	I1027 21:54:26.002651  487076 addons.go:238] Setting addon ingress=true in "addons-681393"
	I1027 21:54:26.002682  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.002693  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.003296  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.003624  487076 addons.go:69] Setting volumesnapshots=true in profile "addons-681393"
	I1027 21:54:26.003651  487076 addons.go:238] Setting addon volumesnapshots=true in "addons-681393"
	I1027 21:54:26.003682  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.003893  487076 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:54:26.004143  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.004155  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.004548  487076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:54:26.004750  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:25.999403  487076 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:54:26.004880  487076 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-681393"
	I1027 21:54:26.004906  487076 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-681393"
	I1027 21:54:26.005071  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.005493  487076 addons.go:69] Setting ingress-dns=true in profile "addons-681393"
	I1027 21:54:26.005513  487076 addons.go:238] Setting addon ingress-dns=true in "addons-681393"
	I1027 21:54:26.005531  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.005545  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.005578  487076 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-681393"
	I1027 21:54:26.005604  487076 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-681393"
	I1027 21:54:26.006035  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.006403  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.008083  487076 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-681393"
	I1027 21:54:26.008118  487076 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-681393"
	I1027 21:54:26.008155  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.008629  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.009107  487076 addons.go:69] Setting cloud-spanner=true in profile "addons-681393"
	I1027 21:54:26.009133  487076 addons.go:238] Setting addon cloud-spanner=true in "addons-681393"
	I1027 21:54:26.009163  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.012773  487076 addons.go:69] Setting registry=true in profile "addons-681393"
	I1027 21:54:26.013052  487076 addons.go:238] Setting addon registry=true in "addons-681393"
	I1027 21:54:26.013217  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.014689  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.014873  487076 addons.go:69] Setting registry-creds=true in profile "addons-681393"
	I1027 21:54:26.015776  487076 addons.go:238] Setting addon registry-creds=true in "addons-681393"
	I1027 21:54:26.015810  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.017500  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.018572  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.051816  487076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 21:54:26.054028  487076 addons.go:238] Setting addon default-storageclass=true in "addons-681393"
	I1027 21:54:26.054079  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.054573  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	W1027 21:54:26.056063  487076 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 21:54:26.056076  487076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:54:26.057215  487076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:54:26.058587  487076 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 21:54:26.058608  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 21:54:26.058665  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
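	Every sshutil connection below is set up the same way: docker container inspect, with the Go template shown in the log line above, is asked for the host port Docker mapped to the container's 22/tcp, and the SSH client then dials that port on 127.0.0.1 (32768 for this node). Reproduced as a small helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort runs the same docker inspect as cli_runner above and
// returns the host port mapped to the container's sshd.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-681393")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh on 127.0.0.1:" + port)
}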
	I1027 21:54:26.090496  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 21:54:26.097478  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 21:54:26.097518  487076 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 21:54:26.097592  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.097616  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.101923  487076 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 21:54:26.101978  487076 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 21:54:26.104546  487076 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 21:54:26.104645  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 21:54:26.104659  487076 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 21:54:26.104807  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.106276  487076 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 21:54:26.106296  487076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 21:54:26.106357  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.107841  487076 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 21:54:26.107866  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 21:54:26.107917  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.114505  487076 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 21:54:26.115738  487076 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1027 21:54:26.116291  487076 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 21:54:26.116934  487076 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 21:54:26.116965  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 21:54:26.117052  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.117490  487076 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 21:54:26.117513  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 21:54:26.117569  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.119455  487076 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 21:54:26.119472  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 21:54:26.119528  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.123791  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 21:54:26.124787  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 21:54:26.125851  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 21:54:26.131940  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 21:54:26.133970  487076 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 21:54:26.134889  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 21:54:26.136129  487076 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 21:54:26.136147  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 21:54:26.136214  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.137514  487076 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 21:54:26.137532  487076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 21:54:26.137586  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.139202  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 21:54:26.140136  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 21:54:26.141042  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 21:54:26.141848  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 21:54:26.141864  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 21:54:26.141933  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.146888  487076 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 21:54:26.147051  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.147909  487076 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 21:54:26.147929  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 21:54:26.148004  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.155080  487076 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 21:54:26.159694  487076 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 21:54:26.160634  487076 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 21:54:26.160654  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 21:54:26.160737  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.164655  487076 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-681393"
	I1027 21:54:26.164773  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.165343  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.167652  487076 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 21:54:26.168524  487076 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 21:54:26.168548  487076 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 21:54:26.168617  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.170355  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.175618  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.186604  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.192570  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.195146  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.206205  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.219706  487076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 21:54:26.223354  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.224572  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.224971  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.225636  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.226045  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.226494  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.226543  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.228030  487076 out.go:179]   - Using image docker.io/busybox:stable
	I1027 21:54:26.229057  487076 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W1027 21:54:26.229606  487076 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 21:54:26.229832  487076 retry.go:31] will retry after 351.996233ms: ssh: handshake failed: EOF
	I1027 21:54:26.230124  487076 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 21:54:26.230142  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 21:54:26.230199  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	W1027 21:54:26.230405  487076 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 21:54:26.230421  487076 retry.go:31] will retry after 203.89578ms: ssh: handshake failed: EOF
	I1027 21:54:26.244859  487076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 21:54:26.268279  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
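	The handshake failures at 21:54:26.229606 and 21:54:26.230405 show retry.go's recovery path: log the error, sleep a jittered delay ("will retry after 351.996233ms"), and dial again, which is why both clients come up successfully a few lines later. A minimal sketch of that pattern; the attempt cap and base delay are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// withRetry runs op until it succeeds or attempts are exhausted,
// sleeping a jittered delay between tries, like retry.go above.
func withRetry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base))) // jitter
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := withRetry(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}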
	I1027 21:54:26.305619  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 21:54:26.349470  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 21:54:26.349677  487076 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 21:54:26.359778  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 21:54:26.362183  487076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 21:54:26.362269  487076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 21:54:26.369114  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 21:54:26.369197  487076 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 21:54:26.386414  487076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 21:54:26.386439  487076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 21:54:26.396188  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 21:54:26.396485  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 21:54:26.398751  487076 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:26.398771  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 21:54:26.399116  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 21:54:26.399132  487076 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 21:54:26.410562  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 21:54:26.411036  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 21:54:26.412599  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 21:54:26.416251  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 21:54:26.426661  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 21:54:26.427263  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 21:54:26.431609  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 21:54:26.440733  487076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 21:54:26.440830  487076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 21:54:26.441548  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:26.449414  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 21:54:26.449528  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 21:54:26.452167  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 21:54:26.452271  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 21:54:26.482155  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 21:54:26.482256  487076 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 21:54:26.497932  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 21:54:26.502379  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 21:54:26.502422  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 21:54:26.558572  487076 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 21:54:26.558596  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 21:54:26.562273  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 21:54:26.562298  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 21:54:26.614247  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 21:54:26.621841  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 21:54:26.621872  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 21:54:26.666686  487076 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1027 21:54:26.668789  487076 node_ready.go:35] waiting up to 6m0s for node "addons-681393" to be "Ready" ...
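The two lines above close out cluster bring-up bookkeeping: minikube injects a host.minikube.internal record (pointing at the gateway, 192.168.49.1) into CoreDNS's ConfigMap so pods can resolve the host machine, then starts a six-minute watch on the node's Ready condition. A minimal way to confirm the injected record from outside the test, assuming the addons-681393 kubeconfig context is active:

	kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A1 host.minikube.internal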
	I1027 21:54:26.678996  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 21:54:26.679087  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 21:54:26.703739  487076 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 21:54:26.703824  487076 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 21:54:26.736559  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 21:54:26.736670  487076 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 21:54:26.818543  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 21:54:26.818651  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 21:54:26.826759  487076 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 21:54:26.826786  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 21:54:26.879083  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 21:54:26.884753  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 21:54:26.884866  487076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 21:54:26.884978  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 21:54:26.885065  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 21:54:26.917299  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 21:54:26.917327  487076 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 21:54:26.939192  487076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 21:54:26.939294  487076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 21:54:26.967873  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 21:54:26.995684  487076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 21:54:26.995713  487076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 21:54:27.032605  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 21:54:27.178511  487076 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-681393" context rescaled to 1 replicas
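Rescaling coredns to one replica is minikube's standard resource-saving step on a single-node cluster; the equivalent manual command would be:

	kubectl -n kube-system scale deployment coredns --replicas=1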
	I1027 21:54:27.585638  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.279976052s)
	I1027 21:54:27.585681  487076 addons.go:479] Verifying addon ingress=true in "addons-681393"
	I1027 21:54:27.585712  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.225825916s)
	I1027 21:54:27.585802  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.175202189s)
	I1027 21:54:27.585882  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174826744s)
	I1027 21:54:27.585956  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.173326464s)
	I1027 21:54:27.586096  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.169821672s)
	I1027 21:54:27.586191  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.159504064s)
	I1027 21:54:27.586223  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.158936874s)
	I1027 21:54:27.586273  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.154586466s)
	I1027 21:54:27.586357  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.144790022s)
	W1027 21:54:27.586385  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:27.586478  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.088501405s)
	I1027 21:54:27.586514  487076 retry.go:31] will retry after 244.379416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
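This validation error is the root cause behind the InspektorGadget retries that dominate the rest of this log: every document in a manifest passed to kubectl apply must declare apiVersion and kind, and /etc/kubernetes/addons/ig-crd.yaml evidently contains a document with neither, so kubectl rejects it before anything reaches the API server and no amount of retrying can succeed (the ig-deployment.yaml objects, by contrast, apply cleanly each time). The failure mode is easy to reproduce in isolation; the ConfigMap below is illustrative, not the actual contents of ig-crd.yaml:

	# Should be rejected client-side for the same reason: no apiVersion, no kind.
	kubectl apply --dry-run=client -f - <<'EOF'
	metadata:
	  name: example
	EOF
	# Passes once both fields are present.
	kubectl apply --dry-run=client -f - <<'EOF'
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: example
	EOF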
	I1027 21:54:27.587048  487076 out.go:179] * Verifying ingress addon...
	I1027 21:54:27.588189  487076 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-681393 service yakd-dashboard -n yakd-dashboard
	
	I1027 21:54:27.589608  487076 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 21:54:27.592983  487076 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1027 21:54:27.594591  487076 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
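The default-storageclass warning is a routine optimistic-concurrency conflict: the addon manager read the local-path StorageClass, another controller updated it in the meantime, and the stale write was rejected on resourceVersion. A patch sidesteps the read-modify-write race because the server merges the change into the latest version; a minimal sketch of the operation the addon was attempting:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'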
	I1027 21:54:27.831690  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:28.040107  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.425736077s)
	I1027 21:54:28.040133  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161021338s)
	I1027 21:54:28.040161  487076 addons.go:479] Verifying addon registry=true in "addons-681393"
	W1027 21:54:28.040156  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 21:54:28.040187  487076 retry.go:31] will retry after 258.257987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
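Unlike the ig-crd case, this failure is a pure ordering race: the batch creates the three snapshot.storage.k8s.io CRDs and, in the same apply, a VolumeSnapshotClass that depends on them, and the API server has not finished establishing the new types when the dependent object arrives (hence "ensure CRDs are installed first"). The --force reapply that follows resolves it once the CRDs are served. Done by hand, the two steps would be serialized with kubectl wait:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml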
	I1027 21:54:28.040498  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.070965807s)
	I1027 21:54:28.040542  487076 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-681393"
	I1027 21:54:28.040621  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.007972038s)
	I1027 21:54:28.040803  487076 addons.go:479] Verifying addon metrics-server=true in "addons-681393"
	I1027 21:54:28.042231  487076 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 21:54:28.042289  487076 out.go:179] * Verifying registry addon...
	I1027 21:54:28.044981  487076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 21:54:28.045010  487076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 21:54:28.048233  487076 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 21:54:28.048256  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:28.048376  487076 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 21:54:28.048399  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:28.094416  487076 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 21:54:28.094450  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
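The repeating "waiting for pod ... current state: Pending" lines from here on are a poll loop in kapi.go: it lists pods matching a label selector and logs the first pod that is not yet Running (the bracketed <nil> appears to be a not-yet-populated status detail). Roughly the same check, expressed as a one-off kubectl query for the ingress selector seen above:

	kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'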
	I1027 21:54:28.298923  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1027 21:54:28.465257  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:28.465303  487076 retry.go:31] will retry after 437.855148ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:28.548535  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:28.548647  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:28.650118  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:28.671977  487076 node_ready.go:57] node "addons-681393" has "Ready":"False" status (will retry)
	I1027 21:54:28.904172  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:29.050059  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:29.050178  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:29.093177  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:29.548829  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:29.548892  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:29.595080  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:30.048693  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:30.048702  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:30.092733  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:30.548991  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:30.549244  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:30.649268  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:30.672165  487076 node_ready.go:57] node "addons-681393" has "Ready":"False" status (will retry)
	I1027 21:54:30.822047  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.523075276s)
	I1027 21:54:30.822141  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.91792959s)
	W1027 21:54:30.822176  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:30.822206  487076 retry.go:31] will retry after 748.96164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:31.048392  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:31.048414  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:31.093810  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:31.549354  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:31.549367  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:31.572291  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:31.650741  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:32.048847  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:32.048975  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:32.092565  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:32.138096  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:32.138133  487076 retry.go:31] will retry after 945.564038ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:32.548101  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:32.548242  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:32.648928  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:33.048886  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:33.049025  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:33.083879  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:33.093390  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:33.171720  487076 node_ready.go:57] node "addons-681393" has "Ready":"False" status (will retry)
	I1027 21:54:33.548930  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:33.549212  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 21:54:33.629875  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:33.629908  487076 retry.go:31] will retry after 1.192517493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:33.649933  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:33.754496  487076 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 21:54:33.754563  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:33.771674  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:33.887096  487076 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 21:54:33.899784  487076 addons.go:238] Setting addon gcp-auth=true in "addons-681393"
	I1027 21:54:33.899838  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:33.900233  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:33.917593  487076 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 21:54:33.917642  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:33.934090  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:34.031873  487076 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 21:54:34.032828  487076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:54:34.033656  487076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 21:54:34.033670  487076 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 21:54:34.046837  487076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 21:54:34.046855  487076 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 21:54:34.049196  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:34.049274  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:34.059873  487076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 21:54:34.059891  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 21:54:34.072363  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 21:54:34.093630  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:34.373904  487076 addons.go:479] Verifying addon gcp-auth=true in "addons-681393"
	I1027 21:54:34.374967  487076 out.go:179] * Verifying gcp-auth addon...
	I1027 21:54:34.376435  487076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 21:54:34.378712  487076 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 21:54:34.378727  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
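gcp-auth is the one addon installed over a second SSH session here: minikube copies the application credentials and project name onto the node, then deploys a mutating admission webhook which, once Ready, injects those credentials into newly created pods (minikube's documented behavior is to set GOOGLE_APPLICATION_CREDENTIALS and mount the key file). A quick smoke test after the webhook pod goes Ready, with an illustrative pod name:

	kubectl run gcp-auth-check --image=busybox --restart=Never -- env
	kubectl logs gcp-auth-check | grep GOOGLE_APPLICATION_CREDENTIALS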
	I1027 21:54:34.547742  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:34.547873  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:34.592748  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:34.822612  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:34.880140  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:35.047876  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:35.047882  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:35.093028  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:35.172087  487076 node_ready.go:57] node "addons-681393" has "Ready":"False" status (will retry)
	W1027 21:54:35.363491  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:35.363523  487076 retry.go:31] will retry after 1.901536998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:35.379375  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:35.548228  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:35.548241  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:35.592825  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:35.879850  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:36.048821  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:36.048840  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:36.093351  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:36.379752  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:36.548918  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:36.549039  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:36.593069  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:36.880354  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:37.048233  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:37.048248  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:37.093091  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:37.265807  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:37.379995  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:37.549253  487076 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 21:54:37.549279  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:37.549446  487076 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 21:54:37.549469  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:37.593475  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:37.671425  487076 node_ready.go:49] node "addons-681393" is "Ready"
	I1027 21:54:37.671454  487076 node_ready.go:38] duration metric: took 11.002633613s for node "addons-681393" to be "Ready" ...
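Node readiness consumed just over 11 seconds of the six-minute budget; on a fresh minikube node the Ready condition typically flips once the CNI (kindnet in this configuration) has brought up the pod network. The same condition can be read directly with a jsonpath filter:

	kubectl get node addons-681393 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'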
	I1027 21:54:37.671483  487076 api_server.go:52] waiting for apiserver process to appear ...
	I1027 21:54:37.671536  487076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 21:54:37.880095  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 21:54:38.035710  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:38.035739  487076 api_server.go:72] duration metric: took 12.036553087s to wait for apiserver process to appear ...
	I1027 21:54:38.035747  487076 retry.go:31] will retry after 4.250585418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:38.035754  487076 api_server.go:88] waiting for apiserver healthz status ...
	I1027 21:54:38.035776  487076 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1027 21:54:38.040938  487076 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1027 21:54:38.042252  487076 api_server.go:141] control plane version: v1.34.1
	I1027 21:54:38.042285  487076 api_server.go:131] duration metric: took 6.521806ms to wait for apiserver health ...
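The healthz probe is a plain HTTPS GET against the apiserver, successful when it returns 200 with body "ok". Reproduced by hand (with -k to skip certificate verification, as a quick smoke test would):

	curl -k https://192.168.49.2:8443/healthz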
	I1027 21:54:38.042297  487076 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 21:54:38.048605  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:38.048639  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:38.049728  487076 system_pods.go:59] 20 kube-system pods found
	I1027 21:54:38.049771  487076 system_pods.go:61] "amd-gpu-device-plugin-txrzm" [24503293-388b-4873-bc11-107a24f28f57] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 21:54:38.049780  487076 system_pods.go:61] "coredns-66bc5c9577-8pt79" [87832036-6af9-4dc9-9b16-1bcf3671b894] Running
	I1027 21:54:38.049795  487076 system_pods.go:61] "csi-hostpath-attacher-0" [ea66be78-f7b8-4684-b477-b41500f5e426] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 21:54:38.049814  487076 system_pods.go:61] "csi-hostpath-resizer-0" [42938fd1-8761-4f67-874e-41d6224778a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 21:54:38.049826  487076 system_pods.go:61] "csi-hostpathplugin-p5sgs" [ab3b75d3-2e4b-408e-9216-3d162a34c2d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 21:54:38.049836  487076 system_pods.go:61] "etcd-addons-681393" [4a99290c-ab6d-40bc-a014-b5e2a655d0ff] Running
	I1027 21:54:38.049842  487076 system_pods.go:61] "kindnet-5g7gz" [a82f4737-bdb6-4fc8-803d-afa31237a5a0] Running
	I1027 21:54:38.049856  487076 system_pods.go:61] "kube-apiserver-addons-681393" [013f5a64-e0b0-4aaa-bb65-8f9230b5b663] Running
	I1027 21:54:38.049865  487076 system_pods.go:61] "kube-controller-manager-addons-681393" [3b41da40-aeb6-4896-bc6d-59c3b1d565c4] Running
	I1027 21:54:38.049874  487076 system_pods.go:61] "kube-ingress-dns-minikube" [b88574d6-394b-4266-a1a1-191b7686c64e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 21:54:38.049883  487076 system_pods.go:61] "kube-proxy-9nhv5" [dbc6ef4d-5de8-4e7f-a6ee-e79d3c8afe68] Running
	I1027 21:54:38.049889  487076 system_pods.go:61] "kube-scheduler-addons-681393" [5f8387c3-53fc-4f5a-88c8-ee8f38995cf5] Running
	I1027 21:54:38.049904  487076 system_pods.go:61] "metrics-server-85b7d694d7-nkkls" [1c66ed47-adbe-4977-9533-1e61982c1a89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 21:54:38.049918  487076 system_pods.go:61] "nvidia-device-plugin-daemonset-b6l7g" [8b67eb48-9663-4ec3-80d1-e64a4bf563b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 21:54:38.049955  487076 system_pods.go:61] "registry-6b586f9694-2tqh6" [6564a666-6603-4044-a2e5-b9e4e0700c5f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 21:54:38.049975  487076 system_pods.go:61] "registry-creds-764b6fb674-c2f45" [5300554b-ec19-4eb4-b416-d72d05fb4df5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 21:54:38.049989  487076 system_pods.go:61] "registry-proxy-wx6pv" [99f13eb6-27b7-4b76-9ed8-62ee24257d3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 21:54:38.050014  487076 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gsfdg" [38c78fd4-7ab1-447c-9e61-598336101feb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:54:38.050031  487076 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n8gmp" [0d98783a-2704-44c2-b6ed-9381e131cc3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:54:38.050043  487076 system_pods.go:61] "storage-provisioner" [8c0989d1-35b5-4024-89f3-6df94b9f2d77] Running
	I1027 21:54:38.050062  487076 system_pods.go:74] duration metric: took 7.75011ms to wait for pod list to return data ...
	I1027 21:54:38.050176  487076 default_sa.go:34] waiting for default service account to be created ...
	I1027 21:54:38.053370  487076 default_sa.go:45] found service account: "default"
	I1027 21:54:38.053393  487076 default_sa.go:55] duration metric: took 3.168409ms for default service account to be created ...
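The default service account check matters because the ServiceAccount admission plugin rejects pods whose service account does not yet exist; the service account controller creates "default" in each namespace shortly after startup, and here it already exists, so the wait costs about 3ms. The equivalent query:

	kubectl get serviceaccount default -n default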
	I1027 21:54:38.053404  487076 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 21:54:38.057671  487076 system_pods.go:86] 20 kube-system pods found
	I1027 21:54:38.057699  487076 system_pods.go:89] "amd-gpu-device-plugin-txrzm" [24503293-388b-4873-bc11-107a24f28f57] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 21:54:38.057706  487076 system_pods.go:89] "coredns-66bc5c9577-8pt79" [87832036-6af9-4dc9-9b16-1bcf3671b894] Running
	I1027 21:54:38.057716  487076 system_pods.go:89] "csi-hostpath-attacher-0" [ea66be78-f7b8-4684-b477-b41500f5e426] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 21:54:38.057727  487076 system_pods.go:89] "csi-hostpath-resizer-0" [42938fd1-8761-4f67-874e-41d6224778a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 21:54:38.057737  487076 system_pods.go:89] "csi-hostpathplugin-p5sgs" [ab3b75d3-2e4b-408e-9216-3d162a34c2d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 21:54:38.057746  487076 system_pods.go:89] "etcd-addons-681393" [4a99290c-ab6d-40bc-a014-b5e2a655d0ff] Running
	I1027 21:54:38.057752  487076 system_pods.go:89] "kindnet-5g7gz" [a82f4737-bdb6-4fc8-803d-afa31237a5a0] Running
	I1027 21:54:38.057760  487076 system_pods.go:89] "kube-apiserver-addons-681393" [013f5a64-e0b0-4aaa-bb65-8f9230b5b663] Running
	I1027 21:54:38.057765  487076 system_pods.go:89] "kube-controller-manager-addons-681393" [3b41da40-aeb6-4896-bc6d-59c3b1d565c4] Running
	I1027 21:54:38.057776  487076 system_pods.go:89] "kube-ingress-dns-minikube" [b88574d6-394b-4266-a1a1-191b7686c64e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 21:54:38.057781  487076 system_pods.go:89] "kube-proxy-9nhv5" [dbc6ef4d-5de8-4e7f-a6ee-e79d3c8afe68] Running
	I1027 21:54:38.057790  487076 system_pods.go:89] "kube-scheduler-addons-681393" [5f8387c3-53fc-4f5a-88c8-ee8f38995cf5] Running
	I1027 21:54:38.057797  487076 system_pods.go:89] "metrics-server-85b7d694d7-nkkls" [1c66ed47-adbe-4977-9533-1e61982c1a89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 21:54:38.057807  487076 system_pods.go:89] "nvidia-device-plugin-daemonset-b6l7g" [8b67eb48-9663-4ec3-80d1-e64a4bf563b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 21:54:38.057822  487076 system_pods.go:89] "registry-6b586f9694-2tqh6" [6564a666-6603-4044-a2e5-b9e4e0700c5f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 21:54:38.057831  487076 system_pods.go:89] "registry-creds-764b6fb674-c2f45" [5300554b-ec19-4eb4-b416-d72d05fb4df5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 21:54:38.057841  487076 system_pods.go:89] "registry-proxy-wx6pv" [99f13eb6-27b7-4b76-9ed8-62ee24257d3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 21:54:38.057850  487076 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gsfdg" [38c78fd4-7ab1-447c-9e61-598336101feb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:54:38.057862  487076 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n8gmp" [0d98783a-2704-44c2-b6ed-9381e131cc3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:54:38.057867  487076 system_pods.go:89] "storage-provisioner" [8c0989d1-35b5-4024-89f3-6df94b9f2d77] Running
	I1027 21:54:38.057879  487076 system_pods.go:126] duration metric: took 4.46821ms to wait for k8s-apps to be running ...
	I1027 21:54:38.057890  487076 system_svc.go:44] waiting for kubelet service to be running...
	I1027 21:54:38.057956  487076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 21:54:38.075036  487076 system_svc.go:56] duration metric: WaitForService took 17.132507ms to wait for the kubelet service
	I1027 21:54:38.075075  487076 kubeadm.go:587] duration metric: took 12.075889968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 21:54:38.075102  487076 node_conditions.go:102] verifying NodePressure condition ...
	I1027 21:54:38.078220  487076 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 21:54:38.078257  487076 node_conditions.go:123] node cpu capacity is 8
	I1027 21:54:38.078275  487076 node_conditions.go:105] duration metric: took 3.167192ms to run NodePressure ...
	I1027 21:54:38.078291  487076 start.go:242] waiting for startup goroutines ...
	I1027 21:54:38.146246  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:38.380506  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:38.548682  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:38.548859  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:38.592828  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:38.880391  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:39.049055  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:39.049185  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:39.150140  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:39.380852  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:39.549018  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:39.549089  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:39.592694  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:39.880081  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:40.048263  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:40.048511  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:40.093423  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:40.381193  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:40.548913  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:40.549039  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:40.593974  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:40.880376  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:41.048759  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:41.048974  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:41.150352  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:41.380055  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:41.548050  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:41.548172  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:41.593288  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:41.880737  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:42.049261  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:42.049337  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:42.093479  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:42.286611  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:42.380300  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:42.548717  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:42.548880  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:42.592714  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:42.880062  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 21:54:43.022283  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:43.022319  487076 retry.go:31] will retry after 2.511992341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
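	The failure above is self-describing: kubectl rejects ig-crd.yaml because the manifest is missing the two mandatory top-level fields, apiVersion and kind, so every subsequent retry hits the same validation error. A minimal sketch of the header every Kubernetes manifest needs follows; the group and resource names are illustrative placeholders, not taken from the shipped inspektor-gadget CRD:
	
	  # Both top-level fields below are required; their absence is
	  # exactly what kubectl reports for ig-crd.yaml.
	  apiVersion: apiextensions.k8s.io/v1
	  kind: CustomResourceDefinition
	  metadata:
	    name: traces.gadget.example.io   # illustrative, not the real CRD name
	  spec:
	    group: gadget.example.io         # illustrative group
	    names:
	      kind: Trace
	      plural: traces
	    scope: Namespaced
	    versions:
	      - name: v1alpha1
	        served: true
	        storage: true
	        schema:
	          openAPIV3Schema:
	            type: object
	
	The --validate=false workaround suggested in the error text would let the apply proceed, but it only masks the malformed manifest rather than fixing it.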
	I1027 21:54:43.048388  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:43.048554  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:43.093885  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:43.380058  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:43.549840  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:43.550095  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:43.650747  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:43.880703  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:44.049756  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:44.049808  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:44.092793  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:44.380213  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:44.548266  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:44.548273  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:44.592850  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:44.879559  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:45.048770  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:45.048864  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:45.092681  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:45.379876  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:45.535094  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:45.549390  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:45.549520  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:45.593452  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:45.880707  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:46.052082  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:46.052777  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:46.094654  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:46.380595  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 21:54:46.436466  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:46.436504  487076 retry.go:31] will retry after 5.042254322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:46.549503  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:46.549590  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:46.593854  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:46.880328  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:47.074301  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:47.074384  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:47.093349  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:47.380702  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:47.549307  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:47.549565  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:47.593040  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:47.881074  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:48.048787  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:48.048811  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:48.093285  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:48.381213  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:48.549034  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:48.549138  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:48.593553  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:48.880068  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:49.048804  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:49.049100  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:49.094056  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:49.380969  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:49.549132  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:49.549281  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:49.593027  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:49.880696  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:50.050077  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:50.050186  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:50.093833  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:50.380047  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:50.549937  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:50.550119  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:50.651577  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:50.880065  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:51.049590  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:51.049793  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:51.093433  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:51.379870  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:51.478988  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:51.548966  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:51.549032  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:51.593242  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:51.880325  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:52.049563  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:52.049650  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 21:54:52.058878  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:52.058916  487076 retry.go:31] will retry after 8.574760051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:52.093439  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:52.379498  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:52.548666  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:52.548779  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:52.594163  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:52.880159  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:53.048268  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:53.048332  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:53.093496  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:53.379961  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:53.549067  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:53.549293  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:53.592874  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:53.880772  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:54.049068  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:54.049189  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:54.093117  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:54.381017  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:54.549786  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:54.550665  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:54.593286  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:54.879969  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:55.048795  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:55.048939  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:55.092265  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:55.380597  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:55.548847  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:55.548999  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:55.592560  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:55.879367  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:56.048571  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:56.048643  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:56.093021  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:56.380090  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:56.548204  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:56.548249  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:56.592892  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:56.879663  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:57.049061  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:57.049274  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:57.093814  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:57.379987  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:57.549339  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:57.549408  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:57.593877  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:57.880333  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:58.049554  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:58.049649  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:58.150519  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:58.380690  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:58.549106  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:58.549182  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:58.592792  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:58.880000  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:59.048400  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:59.048899  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:59.093797  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:59.380813  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:59.548906  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:59.549108  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:59.592921  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:59.880646  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:00.049180  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:00.049238  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:00.150362  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:00.380727  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:00.548697  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:00.548808  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:00.593564  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:00.634688  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:55:00.886398  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:01.049111  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:01.049516  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:01.093676  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:55:01.260311  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:55:01.260351  487076 retry.go:31] will retry after 19.680128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:55:01.380617  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:01.549466  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:01.549493  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:01.593901  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:01.880106  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:02.049621  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:02.050195  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:02.151265  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:02.380056  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:02.548074  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:02.548180  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:02.592730  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:02.879662  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:03.048752  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:03.048814  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:03.092583  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:03.379519  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:03.548508  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:03.548573  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:03.593008  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:03.879898  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:04.049697  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:04.049852  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:04.094425  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:04.379549  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:04.548879  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:04.549368  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:04.593275  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:04.881367  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:05.048383  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:05.048433  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:05.093534  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:05.379848  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:05.549016  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:05.549035  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:05.593645  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:05.879792  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:06.049716  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:06.049755  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:06.150342  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:06.380397  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:06.548935  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:06.549125  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:06.592727  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:06.880609  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:07.049526  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:07.049596  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:07.150647  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:07.380027  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:07.548221  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:07.548236  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:07.592849  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:07.879781  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:08.049271  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:08.049295  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:08.093329  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:08.380804  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:08.549197  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:08.549285  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:08.593193  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:08.880471  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:09.049200  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:09.049259  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:09.093332  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:09.379725  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:09.549455  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:09.549547  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:09.593517  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:09.880289  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:10.048394  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:10.048642  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:10.093659  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:10.380099  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:10.549078  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:10.549217  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:10.593177  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:10.881773  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:11.049114  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:11.049370  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:11.094126  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:11.380768  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:11.549246  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:11.549314  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:11.593386  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:11.879938  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:12.050153  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:12.050211  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:12.093651  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:12.380091  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:12.548182  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:12.548293  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:12.593276  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:12.880527  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:13.048896  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:13.049077  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:13.093275  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:13.381037  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:13.549880  487076 kapi.go:107] duration metric: took 45.504869179s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 21:55:13.550165  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:13.593038  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:13.880451  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:14.049561  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:14.093757  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:14.381109  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:14.548452  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:14.592963  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:14.880874  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:15.050125  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:15.093467  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:15.379456  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:15.548933  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:15.592917  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:15.880469  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:16.049734  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:16.150399  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:16.380974  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:16.549369  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:16.593684  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:16.882332  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:17.051924  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:17.097792  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:17.382073  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:17.550016  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:17.594163  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:17.880575  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:18.050229  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:18.093701  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:18.379569  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:18.549214  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:18.593888  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:18.880679  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:19.049234  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:19.093887  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:19.380648  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:19.549380  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:19.593868  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:19.880882  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:20.050104  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:20.093253  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:20.380439  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:20.548851  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:20.594190  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:20.880778  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:20.940983  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:55:21.048666  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:21.093914  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:21.380825  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 21:55:21.495373  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:55:21.495424  487076 retry.go:31] will retry after 27.181488911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:55:21.548498  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:21.593103  487076 kapi.go:107] duration metric: took 54.003494051s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 21:55:21.880328  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:22.048448  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:22.379833  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:22.549093  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:22.880397  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:23.048765  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:23.379828  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:23.549215  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:23.880535  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:24.049243  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:24.380745  487076 kapi.go:107] duration metric: took 50.004302519s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 21:55:24.381570  487076 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-681393 cluster.
	I1027 21:55:24.382744  487076 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 21:55:24.383980  487076 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1027 21:55:24.550099  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:25.049386  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:25.549513  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:26.129220  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:26.548959  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:27.049120  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:27.548575  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:28.049084  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:28.549922  487076 kapi.go:107] duration metric: took 1m0.504939819s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 21:55:48.678804  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1027 21:55:49.239863  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 21:55:49.240006  487076 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
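
Per retry.go above, minikube waited a jittered backoff of roughly 27s before the second attempt, and when that attempt failed too it downgraded the error to a warning so the remaining addons could still be enabled. A minimal sketch of that retry shape follows; the attempt count and base delay are assumptions for illustration, not minikube's actual retry.go.

// retryApply runs a command and, on failure, waits a jittered backoff before
// retrying, surfacing the last error as a warning once attempts run out.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func retryApply(args []string, attempts int, base time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("%v\n%s", err, out)
		if i < attempts-1 {
			// Jitter the delay so concurrent retries don't synchronize.
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, lastErr)
			time.Sleep(delay)
		}
	}
	return lastErr
}

func main() {
	args := []string{"kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/ig-crd.yaml"}
	if err := retryApply(args, 2, 15*time.Second); err != nil {
		fmt.Printf("! Enabling 'inspektor-gadget' returned an error: %v\n", err)
	}
}
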
	I1027 21:55:49.241280  487076 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, storage-provisioner, ingress-dns, nvidia-device-plugin, registry-creds, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 21:55:49.242187  487076 addons.go:514] duration metric: took 1m23.242980308s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin storage-provisioner ingress-dns nvidia-device-plugin registry-creds yakd storage-provisioner-rancher metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 21:55:49.242249  487076 start.go:247] waiting for cluster config update ...
	I1027 21:55:49.242276  487076 start.go:256] writing updated cluster config ...
	I1027 21:55:49.242629  487076 ssh_runner.go:195] Run: rm -f paused
	I1027 21:55:49.246829  487076 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 21:55:49.250827  487076 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8pt79" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.255806  487076 pod_ready.go:94] pod "coredns-66bc5c9577-8pt79" is "Ready"
	I1027 21:55:49.255831  487076 pod_ready.go:86] duration metric: took 4.974565ms for pod "coredns-66bc5c9577-8pt79" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.258000  487076 pod_ready.go:83] waiting for pod "etcd-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.261876  487076 pod_ready.go:94] pod "etcd-addons-681393" is "Ready"
	I1027 21:55:49.261899  487076 pod_ready.go:86] duration metric: took 3.87761ms for pod "etcd-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.263770  487076 pod_ready.go:83] waiting for pod "kube-apiserver-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.267327  487076 pod_ready.go:94] pod "kube-apiserver-addons-681393" is "Ready"
	I1027 21:55:49.267348  487076 pod_ready.go:86] duration metric: took 3.55949ms for pod "kube-apiserver-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.269155  487076 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.650974  487076 pod_ready.go:94] pod "kube-controller-manager-addons-681393" is "Ready"
	I1027 21:55:49.651006  487076 pod_ready.go:86] duration metric: took 381.83076ms for pod "kube-controller-manager-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.851729  487076 pod_ready.go:83] waiting for pod "kube-proxy-9nhv5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:50.251484  487076 pod_ready.go:94] pod "kube-proxy-9nhv5" is "Ready"
	I1027 21:55:50.251517  487076 pod_ready.go:86] duration metric: took 399.75771ms for pod "kube-proxy-9nhv5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:50.451444  487076 pod_ready.go:83] waiting for pod "kube-scheduler-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:50.850837  487076 pod_ready.go:94] pod "kube-scheduler-addons-681393" is "Ready"
	I1027 21:55:50.850914  487076 pod_ready.go:86] duration metric: took 399.399412ms for pod "kube-scheduler-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:50.850963  487076 pod_ready.go:40] duration metric: took 1.604073115s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 21:55:50.898674  487076 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 21:55:50.901032  487076 out.go:179] * Done! kubectl is now configured to use "addons-681393" cluster and "default" namespace by default
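
The pod_ready.go lines above poll each control-plane pod until it reports the PodReady condition with status True. A sketch of that readiness check with client-go follows; the kubeconfig path and pod name are taken from the log, and the rest is illustrative rather than minikube's actual pod_ready.go.

// podIsReady mirrors the check behind the `pod "..." is "Ready"` lines:
// a pod is Ready when its PodReady condition has status True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(
		context.Background(), "etcd-addons-681393", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q ready: %v\n", pod.Name, podIsReady(pod))
}
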
	
	
	==> CRI-O <==
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.903707073Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.903753513Z" level=info msg="Removed pod sandbox: 1804805d9248dfbf9d59ca431b88ceb5be6a33d2a1ee9938ad7cb8111c044128" id=5ed5b053-6f86-4b09-802d-80ddd9fae47c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.904074023Z" level=info msg="Stopping pod sandbox: 2279130517c1c91b458e5311a4c2b3c3a85c2c312727bf694b9ffde801742353" id=8b0e7c77-8db7-4567-ba20-32152147dab1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.904118928Z" level=info msg="Stopped pod sandbox (already stopped): 2279130517c1c91b458e5311a4c2b3c3a85c2c312727bf694b9ffde801742353" id=8b0e7c77-8db7-4567-ba20-32152147dab1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.904451231Z" level=info msg="Removing pod sandbox: 2279130517c1c91b458e5311a4c2b3c3a85c2c312727bf694b9ffde801742353" id=04fa5f22-7716-40a9-a4ba-40aa7d53361b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.906909355Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.906972148Z" level=info msg="Removed pod sandbox: 2279130517c1c91b458e5311a4c2b3c3a85c2c312727bf694b9ffde801742353" id=04fa5f22-7716-40a9-a4ba-40aa7d53361b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.90730744Z" level=info msg="Stopping pod sandbox: b96e019674aa85af94a1627c4aeff61291ef3101a1acdc40c46bd3adf026e8d6" id=2da25b91-173c-4023-a4eb-cb89155d3451 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.907352427Z" level=info msg="Stopped pod sandbox (already stopped): b96e019674aa85af94a1627c4aeff61291ef3101a1acdc40c46bd3adf026e8d6" id=2da25b91-173c-4023-a4eb-cb89155d3451 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.907625388Z" level=info msg="Removing pod sandbox: b96e019674aa85af94a1627c4aeff61291ef3101a1acdc40c46bd3adf026e8d6" id=0b573676-4e61-4cd5-91f4-e6d54c5156c8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.91055292Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 21:57:19 addons-681393 crio[775]: time="2025-10-27T21:57:19.910597012Z" level=info msg="Removed pod sandbox: b96e019674aa85af94a1627c4aeff61291ef3101a1acdc40c46bd3adf026e8d6" id=0b573676-4e61-4cd5-91f4-e6d54c5156c8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.189453785Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-lhjkw/POD" id=6c6e4253-7668-4fe8-930b-a312b3350dcb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.18957433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.196926835Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-lhjkw Namespace:default ID:2a6979d16020976be1da004cd7872040739db3af5413bc26171327715b3608ce UID:fd5d547a-b73a-48e0-9d74-40f8f44a6c50 NetNS:/var/run/netns/a9b3ccc5-39bc-418a-9ddf-c6745831748a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000888488}] Aliases:map[]}"
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.196986347Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-lhjkw to CNI network \"kindnet\" (type=ptp)"
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.208178429Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-lhjkw Namespace:default ID:2a6979d16020976be1da004cd7872040739db3af5413bc26171327715b3608ce UID:fd5d547a-b73a-48e0-9d74-40f8f44a6c50 NetNS:/var/run/netns/a9b3ccc5-39bc-418a-9ddf-c6745831748a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000888488}] Aliases:map[]}"
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.208326285Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-lhjkw for CNI network kindnet (type=ptp)"
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.209335289Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.210230361Z" level=info msg="Ran pod sandbox 2a6979d16020976be1da004cd7872040739db3af5413bc26171327715b3608ce with infra container: default/hello-world-app-5d498dc89-lhjkw/POD" id=6c6e4253-7668-4fe8-930b-a312b3350dcb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.211715422Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f1a10f0d-5c09-45ef-8129-2e708cabc5a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.211926938Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=f1a10f0d-5c09-45ef-8129-2e708cabc5a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.211993223Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=f1a10f0d-5c09-45ef-8129-2e708cabc5a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.212730841Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=a73bebaf-f868-430a-9f83-431a43669dc8 name=/runtime.v1.ImageService/PullImage
	Oct 27 21:58:37 addons-681393 crio[775]: time="2025-10-27T21:58:37.222893572Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
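
The tail of this log shows the kubelet's image flow over the CRI gRPC API: an ImageStatus call finds docker.io/kicbase/echo-server:1.0 absent, so a PullImage call follows. The same two calls can be made directly against CRI-O's socket; the sketch below assumes the default /var/run/crio/crio.sock path and keeps error handling minimal.

// A sketch of the CRI calls behind the last log lines: ImageStatus to check
// for docker.io/kicbase/echo-server:1.0, then PullImage when it is absent.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx := context.Background()
	img := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}
	images := runtimeapi.NewImageServiceClient(conn)

	status, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		panic(err)
	}
	if status.Image == nil { // "Image ... not found" in the log above
		resp, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", resp.ImageRef)
	}
}
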
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	7aad8afadd34f       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   31db7070c7ce4       registry-creds-764b6fb674-c2f45             kube-system
	26e0087437589       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   8b034e7e4deb1       nginx                                       default
	2717d9026043f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   8b8f58837bf17       busybox                                     default
	2010575178c32       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	59650918c62fb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	85ee742586776       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	f24d2cb4a2b58       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	182c62dbb6d73       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   fb5c9c6913299       gcp-auth-78565c9fb4-mqt6k                   gcp-auth
	3de48bac49627       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   3337cb68fea08       ingress-nginx-controller-675c5ddd98-glp28   ingress-nginx
	6467e0e7a8c5b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	ff45bb62e13ce       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   e0da334faea8e       gadget-g4nwh                                gadget
	f5f70b0c5ec76       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   50bf471b22bee       registry-proxy-wx6pv                        kube-system
	9f32528dcb836       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   a6ada1620976a       amd-gpu-device-plugin-txrzm                 kube-system
	153647beb1594       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	5ddf0325ff467       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   ad5e56c614024       nvidia-device-plugin-daemonset-b6l7g        kube-system
	b847234d4f511       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   2bc2cd958bad3       snapshot-controller-7d9fbc56b8-n8gmp        kube-system
	4b58171ccaea0       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   6b268f837dfbc       csi-hostpath-attacher-0                     kube-system
	f55e91ef28796       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   a7bdc864e3136       csi-hostpath-resizer-0                      kube-system
	0a08d08180b3c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   741db9095fe2e       snapshot-controller-7d9fbc56b8-gsfdg        kube-system
	1b44a338b5f1a       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   dcaaed5f1068c       yakd-dashboard-5ff678cb9-2qn6r              yakd-dashboard
	aa4d992979360       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             3 minutes ago        Exited              patch                                    2                   51c82d56a6523       ingress-nginx-admission-patch-tglxq         ingress-nginx
	28e0d7defa53b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   cca1620056ab2       ingress-nginx-admission-create-crz97        ingress-nginx
	fb54ab1a61dad       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   c8bcf6008e0df       registry-6b586f9694-2tqh6                   kube-system
	7d170ca1d55a9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   7887c01d60310       local-path-provisioner-648f6765c9-nxsbb     local-path-storage
	00cc26010baa4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   d76a1ca895668       kube-ingress-dns-minikube                   kube-system
	1f9c8cd6b818b       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   6cf54e016b481       cloud-spanner-emulator-86bd5cbb97-mjqsc     default
	37c2044b18ebd       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   cb12f969b1cd9       metrics-server-85b7d694d7-nkkls             kube-system
	49d0fe83e58c6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago        Running             coredns                                  0                   996ea730acecc       coredns-66bc5c9577-8pt79                    kube-system
	bd12cfcd64231       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   701d6c4bf6180       storage-provisioner                         kube-system
	27e7e39745889       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   1a0cafbdf33cb       kube-proxy-9nhv5                            kube-system
	65ad03529a586       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   875fe3a06b628       kindnet-5g7gz                               kube-system
	768d42a191bfa       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   cf1a0de9d9891       kube-controller-manager-addons-681393       kube-system
	9ca7e0d969e10       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   876ee064ef2dc       kube-apiserver-addons-681393                kube-system
	c7060ff537769       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   999d55a6c9def       etcd-addons-681393                          kube-system
	6924a158f2354       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   2efd01e5d650f       kube-scheduler-addons-681393                kube-system
	
	
	==> coredns [49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf] <==
	[INFO] 10.244.0.22:39975 - 46262 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.007410627s
	[INFO] 10.244.0.22:48425 - 2403 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003616018s
	[INFO] 10.244.0.22:46445 - 27836 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005057748s
	[INFO] 10.244.0.22:49343 - 25957 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004746444s
	[INFO] 10.244.0.22:36556 - 41327 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005947448s
	[INFO] 10.244.0.22:48922 - 60592 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001206754s
	[INFO] 10.244.0.22:46013 - 35587 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001380023s
	[INFO] 10.244.0.24:60854 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000328697s
	[INFO] 10.244.0.24:38311 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192703s
	[INFO] 10.244.0.31:56379 - 35180 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000253442s
	[INFO] 10.244.0.31:38949 - 32579 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000354165s
	[INFO] 10.244.0.31:46222 - 19553 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00016206s
	[INFO] 10.244.0.31:39373 - 20310 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000216509s
	[INFO] 10.244.0.31:54208 - 48300 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000122608s
	[INFO] 10.244.0.31:39468 - 62229 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000187052s
	[INFO] 10.244.0.31:49154 - 54305 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003180784s
	[INFO] 10.244.0.31:46116 - 12851 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.005737427s
	[INFO] 10.244.0.31:59465 - 59865 "AAAA IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.004950137s
	[INFO] 10.244.0.31:33279 - 12358 "A IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.006000691s
	[INFO] 10.244.0.31:50037 - 33772 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004742173s
	[INFO] 10.244.0.31:51819 - 62822 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005226394s
	[INFO] 10.244.0.31:53982 - 9672 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004605545s
	[INFO] 10.244.0.31:56284 - 11403 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00531383s
	[INFO] 10.244.0.31:33340 - 34506 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001550868s
	[INFO] 10.244.0.31:37143 - 7799 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001912235s
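
The NXDOMAIN cascade above is ordinary ndots behavior, not a fault: with the cluster default ndots:5, a name with fewer than five dots such as accounts.google.com is first tried against every suffix in the pod's search path, and only the final absolute query returns NOERROR. The sketch below reproduces that candidate order; the search list mirrors the suffixes visible in the log and is otherwise an assumption about the pod's resolv.conf.

// Reproduces the query order behind the NXDOMAIN cascade: with ndots:5, a
// name with fewer than five dots is tried against every search suffix
// first, and only then as-is.
package main

import (
	"fmt"
	"strings"
)

func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, suffix := range search {
			out = append(out, name+"."+suffix)
		}
	}
	return append(out, name) // the absolute query comes last
}

func main() {
	search := []string{
		"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local",
		"local", "europe-west4-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal", "google.internal",
	}
	for _, q := range candidates("accounts.google.com", search, 5) {
		fmt.Println(q) // each suffixed form returned NXDOMAIN in the log
	}
}
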
	
	
	==> describe nodes <==
	Name:               addons-681393
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-681393
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=addons-681393
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T21_54_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-681393
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-681393"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 21:54:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-681393
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 21:58:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 21:58:25 +0000   Mon, 27 Oct 2025 21:54:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 21:58:25 +0000   Mon, 27 Oct 2025 21:54:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 21:58:25 +0000   Mon, 27 Oct 2025 21:54:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 21:58:25 +0000   Mon, 27 Oct 2025 21:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-681393
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2756eef5-641f-4a79-a5ec-5fcab8f11b6e
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  default                     cloud-spanner-emulator-86bd5cbb97-mjqsc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  default                     hello-world-app-5d498dc89-lhjkw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-g4nwh                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  gcp-auth                    gcp-auth-78565c9fb4-mqt6k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-glp28    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m11s
	  kube-system                 amd-gpu-device-plugin-txrzm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 coredns-66bc5c9577-8pt79                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m13s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 csi-hostpathplugin-p5sgs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 etcd-addons-681393                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m19s
	  kube-system                 kindnet-5g7gz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m13s
	  kube-system                 kube-apiserver-addons-681393                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-addons-681393        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-9nhv5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-addons-681393                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 metrics-server-85b7d694d7-nkkls              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m11s
	  kube-system                 nvidia-device-plugin-daemonset-b6l7g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-6b586f9694-2tqh6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 registry-creds-764b6fb674-c2f45              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 registry-proxy-wx6pv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-gsfdg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 snapshot-controller-7d9fbc56b8-n8gmp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  local-path-storage          local-path-provisioner-648f6765c9-nxsbb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2qn6r               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m11s  kube-proxy       
	  Normal  Starting                 4m19s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m19s  kubelet          Node addons-681393 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s  kubelet          Node addons-681393 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s  kubelet          Node addons-681393 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m14s  node-controller  Node addons-681393 event: Registered Node addons-681393 in Controller
	  Normal  NodeReady                4m1s   kubelet          Node addons-681393 status is now: NodeReady
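
The Allocated resources block is simply each pod's requests summed and divided by the node's allocatable capacity. A worked check of the cpu row (1050m of 8 allocatable cores, about 13%) using apimachinery's Quantity type, with the per-pod values copied from the table above:

// Sums the CPU requests from the pod table and divides by the node's
// 8 allocatable cores, using the same Quantity type kubectl works with.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	requests := []string{
		"100m", // ingress-nginx controller
		"100m", // coredns
		"100m", // etcd
		"100m", // kindnet
		"250m", // kube-apiserver
		"200m", // kube-controller-manager
		"100m", // kube-scheduler
		"100m", // metrics-server
	}
	total := resource.NewMilliQuantity(0, resource.DecimalSI)
	for _, r := range requests {
		q := resource.MustParse(r)
		total.Add(q)
	}
	allocatable := resource.MustParse("8")
	pct := float64(total.MilliValue()) / float64(allocatable.MilliValue()) * 100
	fmt.Printf("cpu requests: %s (%.0f%%)\n", total.String(), pct) // 1050m (13%)
}
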
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267] <==
	{"level":"warn","ts":"2025-10-27T21:54:17.083134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.090568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.103095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.109326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.115731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.122010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.128996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.135309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.141832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.147756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.154074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.161103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.167464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.174447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.190674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.197216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.203511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.259792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:28.549366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:28.556123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35778","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T21:54:47.071185Z","caller":"traceutil/trace.go:172","msg":"trace[356343482] transaction","detail":"{read_only:false; response_revision:969; number_of_response:1; }","duration":"104.174849ms","start":"2025-10-27T21:54:46.966981Z","end":"2025-10-27T21:54:47.071156Z","steps":["trace[356343482] 'process raft request'  (duration: 71.691714ms)","trace[356343482] 'compare'  (duration: 32.352529ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T21:54:54.808923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:54.816358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:54.828008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:54.834303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37762","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [182c62dbb6d731bd7488f43383d3ab33b72c3391b631c2c3140cc458b433c19e] <==
	2025/10/27 21:55:24 GCP Auth Webhook started!
	2025/10/27 21:55:51 Ready to marshal response ...
	2025/10/27 21:55:51 Ready to write response ...
	2025/10/27 21:55:51 Ready to marshal response ...
	2025/10/27 21:55:51 Ready to write response ...
	2025/10/27 21:55:51 Ready to marshal response ...
	2025/10/27 21:55:51 Ready to write response ...
	2025/10/27 21:56:11 Ready to marshal response ...
	2025/10/27 21:56:11 Ready to write response ...
	2025/10/27 21:56:11 Ready to marshal response ...
	2025/10/27 21:56:11 Ready to write response ...
	2025/10/27 21:56:18 Ready to marshal response ...
	2025/10/27 21:56:18 Ready to write response ...
	2025/10/27 21:56:18 Ready to marshal response ...
	2025/10/27 21:56:18 Ready to write response ...
	2025/10/27 21:56:21 Ready to marshal response ...
	2025/10/27 21:56:21 Ready to write response ...
	2025/10/27 21:56:33 Ready to marshal response ...
	2025/10/27 21:56:33 Ready to write response ...
	2025/10/27 21:56:35 Ready to marshal response ...
	2025/10/27 21:56:35 Ready to write response ...
	2025/10/27 21:58:36 Ready to marshal response ...
	2025/10/27 21:58:36 Ready to write response ...
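
Each "Ready to marshal response / Ready to write response" pair corresponds to one AdmissionReview round-trip in which the webhook decides whether to patch a pod with the credential mount. The generic shape of such a handler is sketched below; this is the standard mutating-webhook pattern, not gcp-auth's actual source, and the cert paths are assumptions.

// A minimal sketch of the AdmissionReview round-trip: decode the review,
// attach a (possibly empty) JSONPatch, marshal, and write the response.
package main

import (
	"encoding/json"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

func serveMutate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if review.Request == nil {
		http.Error(w, "empty admission request", http.StatusBadRequest)
		return
	}
	patchType := admissionv1.PatchTypeJSONPatch
	review.Response = &admissionv1.AdmissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		PatchType: &patchType,
		// A real mutator would emit JSONPatch ops that mount the
		// credential volume; an empty patch list is a no-op.
		Patch: []byte(`[]`),
	}
	review.Request = nil
	out, _ := json.Marshal(&review) // "Ready to marshal response ..."
	w.Header().Set("Content-Type", "application/json")
	w.Write(out) // "Ready to write response ..."
}

func main() {
	http.HandleFunc("/mutate", serveMutate)
	// The kube-apiserver requires TLS for webhooks; cert paths are assumed.
	http.ListenAndServeTLS(":8443", "/certs/tls.crt", "/certs/tls.key", nil)
}
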
	
	
	==> kernel <==
	 21:58:38 up  1:40,  0 user,  load average: 0.45, 0.74, 15.37
	Linux addons-681393 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88] <==
	I1027 21:56:36.917691       1 main.go:301] handling current node
	I1027 21:56:46.918131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:56:46.918169       1 main.go:301] handling current node
	I1027 21:56:56.918113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:56:56.918157       1 main.go:301] handling current node
	I1027 21:57:06.917685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:57:06.917720       1 main.go:301] handling current node
	I1027 21:57:16.917665       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:57:16.917700       1 main.go:301] handling current node
	I1027 21:57:26.917654       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:57:26.917697       1 main.go:301] handling current node
	I1027 21:57:36.921667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:57:36.921704       1 main.go:301] handling current node
	I1027 21:57:46.916733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:57:46.916774       1 main.go:301] handling current node
	I1027 21:57:56.918858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:57:56.918903       1 main.go:301] handling current node
	I1027 21:58:06.923039       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:58:06.923078       1 main.go:301] handling current node
	I1027 21:58:16.916976       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:58:16.917015       1 main.go:301] handling current node
	I1027 21:58:26.917318       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:58:26.917354       1 main.go:301] handling current node
	I1027 21:58:36.916704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:58:36.916741       1 main.go:301] handling current node
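
kindnet emits one "Handling node" pair every ten seconds: a fixed-interval reconcile over the node list. A sketch of that loop with client-go follows, with the interval and in-cluster config as assumptions rather than kindnet's actual source.

// A fixed-interval reconcile over the node list, matching the cadence of
// the "Handling node with IPs" lines above.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for range time.Tick(10 * time.Second) {
		nodes, err := clientset.CoreV1().Nodes().List(
			context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Printf("listing nodes: %v", err)
			continue
		}
		for _, node := range nodes.Items {
			// Real kindnet programs routes per node; here we just log.
			log.Printf("Handling node %s with addresses %v",
				node.Name, node.Status.Addresses)
		}
	}
}
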
	
	
	==> kube-apiserver [9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10] <==
	W1027 21:54:41.958848       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 21:54:41.958928       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1027 21:54:41.958968       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1027 21:54:41.958977       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1027 21:54:41.960142       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1027 21:54:45.970362       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.192.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.192.22:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1027 21:54:45.973597       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 21:54:45.973655       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1027 21:54:45.996644       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1027 21:54:54.808874       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 21:54:54.816301       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 21:54:54.828015       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 21:54:54.834265       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1027 21:56:00.591419       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40994: use of closed network connection
	E1027 21:56:00.749642       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41028: use of closed network connection
	I1027 21:56:11.818336       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 21:56:12.012588       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.158.247"}
	I1027 21:56:31.982168       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1027 21:58:36.961426       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.96.214"}
	
	
	==> kube-controller-manager [768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74] <==
	I1027 21:54:24.791805       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 21:54:24.792018       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 21:54:24.792147       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 21:54:24.792336       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 21:54:24.792352       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 21:54:24.792490       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-681393"
	I1027 21:54:24.792554       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 21:54:24.792577       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 21:54:24.792593       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 21:54:24.792668       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 21:54:24.793363       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 21:54:24.793378       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 21:54:24.793435       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 21:54:24.793500       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 21:54:24.795730       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 21:54:24.795735       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 21:54:24.802421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 21:54:24.814247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 21:54:27.549278       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1027 21:54:39.793794       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1027 21:54:54.801664       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 21:54:54.801743       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 21:54:54.821967       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 21:54:54.902676       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 21:54:54.922908       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12] <==
	I1027 21:54:26.369218       1 server_linux.go:53] "Using iptables proxy"
	I1027 21:54:26.641245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 21:54:26.748247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 21:54:26.748315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 21:54:26.752694       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 21:54:27.005603       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 21:54:27.005791       1 server_linux.go:132] "Using iptables Proxier"
	I1027 21:54:27.013349       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 21:54:27.018523       1 server.go:527] "Version info" version="v1.34.1"
	I1027 21:54:27.020173       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 21:54:27.033556       1 config.go:200] "Starting service config controller"
	I1027 21:54:27.033589       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 21:54:27.034031       1 config.go:106] "Starting endpoint slice config controller"
	I1027 21:54:27.034051       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 21:54:27.034098       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 21:54:27.034106       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 21:54:27.034791       1 config.go:309] "Starting node config controller"
	I1027 21:54:27.034813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 21:54:27.034821       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 21:54:27.139832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 21:54:27.142568       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 21:54:27.140582       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567] <==
	E1027 21:54:17.672293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 21:54:17.672323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 21:54:17.672392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 21:54:17.672604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 21:54:17.672613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 21:54:17.672027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 21:54:17.673974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 21:54:17.674032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 21:54:17.674059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 21:54:17.673993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 21:54:17.673999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 21:54:17.674096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 21:54:17.674163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 21:54:18.480733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 21:54:18.502973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 21:54:18.625684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 21:54:18.670214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 21:54:18.686424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 21:54:18.771840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 21:54:18.780159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 21:54:18.841443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 21:54:18.864756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 21:54:18.875118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 21:54:18.897274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1027 21:54:21.170138       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.302415    1317 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719047e7-330e-46b1-96cf-42f60996ef54-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "719047e7-330e-46b1-96cf-42f60996ef54" (UID: "719047e7-330e-46b1-96cf-42f60996ef54"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.304900    1317 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719047e7-330e-46b1-96cf-42f60996ef54-kube-api-access-2zvhc" (OuterVolumeSpecName: "kube-api-access-2zvhc") pod "719047e7-330e-46b1-96cf-42f60996ef54" (UID: "719047e7-330e-46b1-96cf-42f60996ef54"). InnerVolumeSpecName "kube-api-access-2zvhc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.306414    1317 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^ce2c9c2e-b37f-11f0-999f-963db89368a2" (OuterVolumeSpecName: "task-pv-storage") pod "719047e7-330e-46b1-96cf-42f60996ef54" (UID: "719047e7-330e-46b1-96cf-42f60996ef54"). InnerVolumeSpecName "pvc-2fd73fce-c8eb-46df-8c42-43b598dfbecc". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.403083    1317 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/719047e7-330e-46b1-96cf-42f60996ef54-gcp-creds\") on node \"addons-681393\" DevicePath \"\""
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.403124    1317 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zvhc\" (UniqueName: \"kubernetes.io/projected/719047e7-330e-46b1-96cf-42f60996ef54-kube-api-access-2zvhc\") on node \"addons-681393\" DevicePath \"\""
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.403182    1317 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-2fd73fce-c8eb-46df-8c42-43b598dfbecc\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ce2c9c2e-b37f-11f0-999f-963db89368a2\") on node \"addons-681393\" "
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.407673    1317 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-2fd73fce-c8eb-46df-8c42-43b598dfbecc" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^ce2c9c2e-b37f-11f0-999f-963db89368a2") on node "addons-681393"
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.504012    1317 reconciler_common.go:299] "Volume detached for volume \"pvc-2fd73fce-c8eb-46df-8c42-43b598dfbecc\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ce2c9c2e-b37f-11f0-999f-963db89368a2\") on node \"addons-681393\" DevicePath \"\""
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.518364    1317 scope.go:117] "RemoveContainer" containerID="054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e"
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.528045    1317 scope.go:117] "RemoveContainer" containerID="054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e"
	Oct 27 21:56:44 addons-681393 kubelet[1317]: E1027 21:56:44.528582    1317 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e\": container with ID starting with 054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e not found: ID does not exist" containerID="054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e"
	Oct 27 21:56:44 addons-681393 kubelet[1317]: I1027 21:56:44.528635    1317 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e"} err="failed to get container status \"054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e\": rpc error: code = NotFound desc = could not find container \"054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e\": container with ID starting with 054ead95a48f1b964a551fa2a3bd922d75004d2640607d15e856d3379d54564e not found: ID does not exist"
	Oct 27 21:56:45 addons-681393 kubelet[1317]: I1027 21:56:45.835926    1317 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719047e7-330e-46b1-96cf-42f60996ef54" path="/var/lib/kubelet/pods/719047e7-330e-46b1-96cf-42f60996ef54/volumes"
	Oct 27 21:57:06 addons-681393 kubelet[1317]: I1027 21:57:06.833354    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-2tqh6" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:57:19 addons-681393 kubelet[1317]: I1027 21:57:19.861429    1317 scope.go:117] "RemoveContainer" containerID="1e9758590c6e09bd2f9f8db218670dabeb81727d0fa2e0247ff143fa5dacca14"
	Oct 27 21:57:19 addons-681393 kubelet[1317]: I1027 21:57:19.870092    1317 scope.go:117] "RemoveContainer" containerID="40c8e20157071714ca7e7fbcc47bced6bdd27a81a42fbf5845cf49530e46eaf1"
	Oct 27 21:57:19 addons-681393 kubelet[1317]: I1027 21:57:19.878254    1317 scope.go:117] "RemoveContainer" containerID="dc17d2af91a67ec6b92a4ad798f003f91109ee5b7b9237c37ec0901a974d3819"
	Oct 27 21:57:23 addons-681393 kubelet[1317]: I1027 21:57:23.832755    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b6l7g" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:57:26 addons-681393 kubelet[1317]: I1027 21:57:26.833306    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wx6pv" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:57:43 addons-681393 kubelet[1317]: I1027 21:57:43.833240    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-txrzm" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:58:30 addons-681393 kubelet[1317]: I1027 21:58:30.833631    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-2tqh6" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:58:31 addons-681393 kubelet[1317]: I1027 21:58:31.832855    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b6l7g" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:58:36 addons-681393 kubelet[1317]: I1027 21:58:36.881371    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-c2f45" podStartSLOduration=248.59079477 podStartE2EDuration="4m10.881339349s" podCreationTimestamp="2025-10-27 21:54:26 +0000 UTC" firstStartedPulling="2025-10-27 21:56:54.855781436 +0000 UTC m=+155.112524567" lastFinishedPulling="2025-10-27 21:56:57.146326031 +0000 UTC m=+157.403069146" observedRunningTime="2025-10-27 21:56:57.585543014 +0000 UTC m=+157.842286151" watchObservedRunningTime="2025-10-27 21:58:36.881339349 +0000 UTC m=+257.138082485"
	Oct 27 21:58:36 addons-681393 kubelet[1317]: I1027 21:58:36.989392    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jz87\" (UniqueName: \"kubernetes.io/projected/fd5d547a-b73a-48e0-9d74-40f8f44a6c50-kube-api-access-6jz87\") pod \"hello-world-app-5d498dc89-lhjkw\" (UID: \"fd5d547a-b73a-48e0-9d74-40f8f44a6c50\") " pod="default/hello-world-app-5d498dc89-lhjkw"
	Oct 27 21:58:36 addons-681393 kubelet[1317]: I1027 21:58:36.989470    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fd5d547a-b73a-48e0-9d74-40f8f44a6c50-gcp-creds\") pod \"hello-world-app-5d498dc89-lhjkw\" (UID: \"fd5d547a-b73a-48e0-9d74-40f8f44a6c50\") " pod="default/hello-world-app-5d498dc89-lhjkw"
	
	
	==> storage-provisioner [bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d] <==
	W1027 21:58:14.712850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:16.715851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:16.719704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:18.723190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:18.727072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:20.730255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:20.734510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:22.737484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:22.741410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:24.744469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:24.748056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:26.750982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:26.754657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:28.757972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:28.763160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:30.766047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:30.770766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:32.774041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:32.779407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:34.782640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:34.786792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:36.790316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:36.794965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:38.799289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:58:38.803284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
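Note on the dump above: the storage-provisioner warnings repeat on a ~2s cycle, which matches a leader-election renew loop; this provisioner most likely still stores its lock as a v1 Endpoints object, so every refresh trips the v1.33+ deprecation warning. A hedged way to eyeball the legacy object alongside its EndpointSlice counterpart (context name taken from this run):

	kubectl --context addons-681393 -n kube-system get endpoints,endpointslices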
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-681393 -n addons-681393
helpers_test.go:269: (dbg) Run:  kubectl --context addons-681393 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-681393 describe pod ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-681393 describe pod ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq: exit status 1 (62.895143ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-crz97" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tglxq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-681393 describe pod ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq: exit status 1
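The NotFound above is a namespace mismatch rather than missing pods: the pod list was taken with -A, so the two admission pods were found in the ingress-nginx namespace, but the describe was issued without -n and therefore looked in default. A hedged re-run targeting the right namespace (pod names copied from this report):

	kubectl --context addons-681393 -n ingress-nginx describe pod ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq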
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (264.243385ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 21:58:39.566370  501943 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:58:39.566698  501943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:58:39.566709  501943 out.go:374] Setting ErrFile to fd 2...
	I1027 21:58:39.566712  501943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:58:39.566975  501943 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:58:39.567290  501943 mustload.go:66] Loading cluster: addons-681393
	I1027 21:58:39.567631  501943 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:58:39.567646  501943 addons.go:606] checking whether the cluster is paused
	I1027 21:58:39.567720  501943 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:58:39.567733  501943 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:58:39.568180  501943 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:58:39.587787  501943 ssh_runner.go:195] Run: systemctl --version
	I1027 21:58:39.587848  501943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:58:39.605371  501943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:58:39.709373  501943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:58:39.709526  501943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:58:39.742458  501943 cri.go:89] found id: "7aad8afadd34f29e141ec5d2470a075be9bbdd8a58057f6902abc0edc66a36fe"
	I1027 21:58:39.742500  501943 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:58:39.742504  501943 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:58:39.742507  501943 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:58:39.742510  501943 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:58:39.742514  501943 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:58:39.742517  501943 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:58:39.742519  501943 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:58:39.742522  501943 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:58:39.742532  501943 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:58:39.742535  501943 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:58:39.742537  501943 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:58:39.742541  501943 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:58:39.742545  501943 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:58:39.742549  501943 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:58:39.742567  501943 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:58:39.742571  501943 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:58:39.742578  501943 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:58:39.742582  501943 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:58:39.742586  501943 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:58:39.742590  501943 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:58:39.742594  501943 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:58:39.742599  501943 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:58:39.742603  501943 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:58:39.742607  501943 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:58:39.742611  501943 cri.go:89] found id: ""
	I1027 21:58:39.742668  501943 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:58:39.758295  501943 out.go:203] 
	W1027 21:58:39.759522  501943 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:58:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:58:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:58:39.759560  501943 out.go:285] * 
	* 
	W1027 21:58:39.762767  501943 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:58:39.763978  501943 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable ingress --alsologtostderr -v=1: exit status 11 (256.590429ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 21:58:39.829612  502005 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:58:39.829956  502005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:58:39.829964  502005 out.go:374] Setting ErrFile to fd 2...
	I1027 21:58:39.829969  502005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:58:39.830179  502005 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:58:39.830505  502005 mustload.go:66] Loading cluster: addons-681393
	I1027 21:58:39.830857  502005 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:58:39.830875  502005 addons.go:606] checking whether the cluster is paused
	I1027 21:58:39.830970  502005 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:58:39.830982  502005 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:58:39.831403  502005 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:58:39.849633  502005 ssh_runner.go:195] Run: systemctl --version
	I1027 21:58:39.849693  502005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:58:39.868216  502005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:58:39.969245  502005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:58:39.969315  502005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:58:39.999496  502005 cri.go:89] found id: "7aad8afadd34f29e141ec5d2470a075be9bbdd8a58057f6902abc0edc66a36fe"
	I1027 21:58:39.999529  502005 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:58:39.999534  502005 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:58:39.999539  502005 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:58:39.999542  502005 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:58:39.999548  502005 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:58:39.999551  502005 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:58:39.999555  502005 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:58:39.999559  502005 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:58:39.999573  502005 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:58:39.999578  502005 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:58:39.999582  502005 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:58:39.999586  502005 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:58:39.999591  502005 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:58:39.999596  502005 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:58:39.999613  502005 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:58:39.999628  502005 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:58:39.999633  502005 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:58:39.999637  502005 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:58:39.999640  502005 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:58:39.999644  502005 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:58:39.999648  502005 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:58:39.999652  502005 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:58:39.999656  502005 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:58:39.999662  502005 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:58:39.999667  502005 cri.go:89] found id: ""
	I1027 21:58:39.999734  502005 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:58:40.015130  502005 out.go:203] 
	W1027 21:58:40.016236  502005 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:58:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:58:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:58:40.016262  502005 out.go:285] * 
	* 
	W1027 21:58:40.019463  502005 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:58:40.020762  502005 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.48s)
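Every disable in this run fails the same way: before touching an addon, minikube checks whether the cluster is paused by listing runc containers, and "sudo runc list -f json" exits 1 because /run/runc does not exist on this crio node. A minimal manual reproduction over SSH (profile name taken from this run; the exact runtime state directory is an assumption, since crio may be configured with a different runtime root):

	minikube -p addons-681393 ssh -- ls -ld /run/runc            # fails: No such file or directory
	minikube -p addons-681393 ssh -- sudo runc list -f json      # reproduces the exit status 1 above
	minikube -p addons-681393 ssh -- sudo crictl ps --quiet      # crio itself still lists containers fine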

                                                
                                    

TestAddons/parallel/InspektorGadget (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-g4nwh" [4aed63d4-be1a-4058-aed6-e1d314e47a88] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003908578s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (256.625844ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 21:56:11.352619  496837 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:11.352726  496837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:11.352730  496837 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:11.352734  496837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:11.352960  496837 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:11.353292  496837 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:11.353702  496837 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:11.353719  496837 addons.go:606] checking whether the cluster is paused
	I1027 21:56:11.353801  496837 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:11.353814  496837 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:11.354239  496837 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:11.372730  496837 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:11.372786  496837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:11.391641  496837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:11.492876  496837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:11.492955  496837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:11.524733  496837 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:11.524764  496837 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:11.524767  496837 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:11.524771  496837 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:11.524774  496837 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:11.524778  496837 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:11.524780  496837 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:11.524783  496837 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:11.524786  496837 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:11.524796  496837 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:11.524800  496837 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:11.524804  496837 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:11.524808  496837 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:11.524812  496837 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:11.524816  496837 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:11.524838  496837 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:11.524849  496837 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:11.524854  496837 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:11.524856  496837 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:11.524858  496837 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:11.524864  496837 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:11.524866  496837 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:11.524869  496837 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:11.524871  496837 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:11.524874  496837 cri.go:89] found id: ""
	I1027 21:56:11.524939  496837 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:11.538911  496837 out.go:203] 
	W1027 21:56:11.539882  496837 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:11.539911  496837 out.go:285] * 
	* 
	W1027 21:56:11.543590  496837 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:11.544580  496837 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.26s)
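The gadget pod itself was healthy within ~5s; only the paused-check made the disable fail. The readiness wait this test performs can be approximated with kubectl (a sketch; label, namespace, and the 8m timeout mirror the log above):

	kubectl --context addons-681393 -n gadget wait pod -l k8s-app=gadget --for=condition=Ready --timeout=8m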

                                                
                                    
TestAddons/parallel/MetricsServer (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.805631ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-nkkls" [1c66ed47-adbe-4977-9533-1e61982c1a89] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0024053s
addons_test.go:463: (dbg) Run:  kubectl --context addons-681393 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (260.849204ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 21:56:11.418462  496860 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:11.418605  496860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:11.418616  496860 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:11.418622  496860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:11.418851  496860 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:11.419149  496860 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:11.419527  496860 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:11.419548  496860 addons.go:606] checking whether the cluster is paused
	I1027 21:56:11.419649  496860 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:11.419666  496860 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:11.420096  496860 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:11.439328  496860 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:11.439381  496860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:11.457447  496860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:11.559819  496860 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:11.559911  496860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:11.592187  496860 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:11.592212  496860 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:11.592216  496860 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:11.592225  496860 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:11.592228  496860 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:11.592232  496860 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:11.592234  496860 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:11.592237  496860 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:11.592240  496860 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:11.592246  496860 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:11.592248  496860 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:11.592251  496860 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:11.592253  496860 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:11.592256  496860 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:11.592258  496860 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:11.592263  496860 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:11.592266  496860 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:11.592270  496860 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:11.592272  496860 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:11.592274  496860 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:11.592280  496860 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:11.592282  496860 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:11.592284  496860 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:11.592287  496860 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:11.592289  496860 cri.go:89] found id: ""
	I1027 21:56:11.592331  496860 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:11.607612  496860 out.go:203] 
	W1027 21:56:11.608418  496860 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:11.608434  496860 out.go:285] * 
	W1027 21:56:11.611993  496860 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:11.612847  496860 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.33s)
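
Every addons-disable failure in this report shares the trace above: before disabling anything, minikube checks whether the cluster is paused by listing kube-system containers through crictl (which succeeds, as the cri.go lines show) and then running `sudo runc list -f json`, which exits 1 with `open /run/runc: no such file or directory`. The likely reading, an inference from the log rather than anything the report states, is that the runc state directory is absent on this crio node, so the paused-check aborts even though the CRI itself is healthy. A minimal reproduction sketch, assuming the addons-681393 profile from this run is still up:

	# Fails the same way the paused-check does:
	minikube -p addons-681393 ssh "sudo runc list -f json"
	# Succeeds, matching the cri.go:89 'found id' lines above:
	minikube -p addons-681393 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
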
TestAddons/parallel/CSI (41.78s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1027 21:56:03.606035  485668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1027 21:56:03.610152  485668 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1027 21:56:03.610182  485668 kapi.go:107] duration metric: took 4.167458ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.182837ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-681393 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/27 21:56:15 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc -o jsonpath={.status.phase} -n default
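
The run above is the harness polling the claim's phase until it leaves Pending; the test only proceeds to the pod once the claim is Bound. A one-shot equivalent for interactive debugging, sketched with kubectl's jsonpath wait (available since kubectl v1.23; the 6m timeout mirrors the test's own deadline):

	kubectl --context addons-681393 wait pvc/hpvc \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=6m
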
addons_test.go:562: (dbg) Run:  kubectl --context addons-681393 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [11c2f1cc-04a6-4e6a-81fa-f8dd646c8904] Pending
helpers_test.go:352: "task-pv-pod" [11c2f1cc-04a6-4e6a-81fa-f8dd646c8904] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [11c2f1cc-04a6-4e6a-81fa-f8dd646c8904] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.00364928s
addons_test.go:572: (dbg) Run:  kubectl --context addons-681393 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-681393 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-681393 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-681393 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-681393 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-681393 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-681393 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [719047e7-330e-46b1-96cf-42f60996ef54] Pending
helpers_test.go:352: "task-pv-pod-restore" [719047e7-330e-46b1-96cf-42f60996ef54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [719047e7-330e-46b1-96cf-42f60996ef54] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004018269s
addons_test.go:614: (dbg) Run:  kubectl --context addons-681393 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-681393 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-681393 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (252.762374ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 21:56:44.935775  499611 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:44.936083  499611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:44.936094  499611 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:44.936098  499611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:44.936313  499611 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:44.936594  499611 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:44.936972  499611 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:44.936994  499611 addons.go:606] checking whether the cluster is paused
	I1027 21:56:44.937099  499611 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:44.937113  499611 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:44.937512  499611 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:44.955055  499611 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:44.955114  499611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:44.972673  499611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:45.073025  499611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:45.073105  499611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:45.102869  499611 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:45.102901  499611 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:45.102907  499611 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:45.102911  499611 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:45.102915  499611 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:45.102920  499611 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:45.102923  499611 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:45.102928  499611 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:45.102931  499611 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:45.102968  499611 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:45.102973  499611 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:45.102977  499611 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:45.102981  499611 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:45.102985  499611 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:45.102990  499611 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:45.103009  499611 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:45.103021  499611 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:45.103028  499611 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:45.103032  499611 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:45.103036  499611 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:45.103039  499611 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:45.103043  499611 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:45.103046  499611 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:45.103051  499611 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:45.103056  499611 cri.go:89] found id: ""
	I1027 21:56:45.103119  499611 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:45.117616  499611 out.go:203] 
	W1027 21:56:45.118828  499611 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:45.118857  499611 out.go:285] * 
	W1027 21:56:45.121992  499611 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:45.123282  499611 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (256.3109ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 21:56:45.186449  499673 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:45.186614  499673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:45.186624  499673 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:45.186629  499673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:45.186887  499673 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:45.187230  499673 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:45.187593  499673 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:45.187612  499673 addons.go:606] checking whether the cluster is paused
	I1027 21:56:45.187719  499673 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:45.187737  499673 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:45.188277  499673 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:45.206566  499673 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:45.206627  499673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:45.224590  499673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:45.326357  499673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:45.326453  499673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:45.359432  499673 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:45.359454  499673 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:45.359458  499673 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:45.359461  499673 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:45.359463  499673 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:45.359467  499673 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:45.359469  499673 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:45.359472  499673 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:45.359475  499673 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:45.359482  499673 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:45.359486  499673 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:45.359490  499673 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:45.359494  499673 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:45.359498  499673 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:45.359502  499673 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:45.359515  499673 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:45.359522  499673 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:45.359529  499673 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:45.359533  499673 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:45.359535  499673 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:45.359541  499673 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:45.359543  499673 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:45.359545  499673 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:45.359548  499673 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:45.359552  499673 cri.go:89] found id: ""
	I1027 21:56:45.359600  499673 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:45.374505  499673 out.go:203] 
	W1027 21:56:45.375643  499673 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:45.375671  499673 out.go:285] * 
	W1027 21:56:45.378745  499673 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:45.379914  499673 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.78s)
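
Worth noting: every CSI step itself passed; only the trailing addon-disable calls failed, for the same runc reason as above. The sequence the test exercises is claim → pod → snapshot → delete → restore claim → restore pod, and the one non-obvious piece is the restore claim, which points its dataSource at the snapshot. A sketch of that step (the storageClassName is an assumption matching minikube's csi-hostpath defaults, not quoted from this report; the claim and snapshot names mirror the testdata above):

	kubectl --context addons-681393 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc
	  dataSource:
	    apiGroup: snapshot.storage.k8s.io
	    kind: VolumeSnapshot
	    name: new-snapshot-demo
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi
	EOF
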
TestAddons/parallel/Headlamp (2.6s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-681393 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-681393 --alsologtostderr -v=1: exit status 11 (254.28979ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 21:56:01.066543  495740 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:01.066862  495740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:01.066873  495740 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:01.066877  495740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:01.067094  495740 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:01.067392  495740 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:01.067751  495740 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:01.067767  495740 addons.go:606] checking whether the cluster is paused
	I1027 21:56:01.067844  495740 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:01.067856  495740 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:01.068307  495740 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:01.085774  495740 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:01.085831  495740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:01.103259  495740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:01.203083  495740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:01.203165  495740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:01.235364  495740 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:01.235390  495740 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:01.235396  495740 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:01.235401  495740 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:01.235407  495740 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:01.235412  495740 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:01.235416  495740 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:01.235420  495740 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:01.235424  495740 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:01.235433  495740 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:01.235437  495740 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:01.235441  495740 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:01.235445  495740 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:01.235449  495740 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:01.235454  495740 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:01.235461  495740 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:01.235472  495740 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:01.235477  495740 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:01.235481  495740 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:01.235485  495740 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:01.235489  495740 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:01.235494  495740 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:01.235498  495740 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:01.235506  495740 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:01.235510  495740 cri.go:89] found id: ""
	I1027 21:56:01.235562  495740 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:01.251243  495740 out.go:203] 
	W1027 21:56:01.252359  495740 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:01.252399  495740 out.go:285] * 
	W1027 21:56:01.255564  495740 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:01.256585  495740 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-681393 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-681393
helpers_test.go:243: (dbg) docker inspect addons-681393:
-- stdout --
	[
	    {
	        "Id": "e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf",
	        "Created": "2025-10-27T21:54:04.757367764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T21:54:04.799404683Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf/hosts",
	        "LogPath": "/var/lib/docker/containers/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf/e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf-json.log",
	        "Name": "/addons-681393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-681393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-681393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e928af592deab6f152229c64674a05e588181263459b4d1f6d80e8e948d318cf",
	                "LowerDir": "/var/lib/docker/overlay2/1f4986ab18921d5246d6778dac952103499ca88f791dcc633d95e9290302ca5f-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f4986ab18921d5246d6778dac952103499ca88f791dcc633d95e9290302ca5f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f4986ab18921d5246d6778dac952103499ca88f791dcc633d95e9290302ca5f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f4986ab18921d5246d6778dac952103499ca88f791dcc633d95e9290302ca5f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-681393",
	                "Source": "/var/lib/docker/volumes/addons-681393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-681393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-681393",
	                "name.minikube.sigs.k8s.io": "addons-681393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c02ce2472d6257ac9f1957ac5281b69604aa81edb772640a048ad5ed15e6200",
	            "SandboxKey": "/var/run/docker/netns/3c02ce2472d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-681393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:42:9b:53:13:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1f9929fc55781ac7dc66eb58190d50c60f897b144595a3fb0395ed718c198aa9",
	                    "EndpointID": "e5b32d746f785464947254505a9da99c8daf04ffacc6aff6d1d32a23c1c533e4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-681393",
	                        "e928af592dea"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
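
The inspect dump above is what the harness parses to reach the node: the cli_runner lines in each failure extract the host port bound to 22/tcp with a Go template, which is why every ssh client in this report connects to 127.0.0.1:32768. The same extraction, runnable as-is against this container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-681393
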
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-681393 -n addons-681393
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-681393 logs -n 25: (1.156888161s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-503153 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-503153   │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ delete  │ -p download-only-503153                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-503153   │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ start   │ -o=json --download-only -p download-only-844553 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-844553   │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ delete  │ -p download-only-844553                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-844553   │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ delete  │ -p download-only-503153                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-503153   │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ delete  │ -p download-only-844553                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-844553   │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ start   │ --download-only -p download-docker-726727 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-726727 │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │                     │
	│ delete  │ -p download-docker-726727                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-726727 │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ start   │ --download-only -p binary-mirror-240698 --alsologtostderr --binary-mirror http://127.0.0.1:35931 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-240698   │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │                     │
	│ delete  │ -p binary-mirror-240698                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-240698   │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ addons  │ disable dashboard -p addons-681393                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-681393          │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │                     │
	│ addons  │ enable dashboard -p addons-681393                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-681393          │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │                     │
	│ start   │ -p addons-681393 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-681393          │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:55 UTC │
	│ addons  │ addons-681393 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-681393          │ jenkins │ v1.37.0 │ 27 Oct 25 21:55 UTC │                     │
	│ addons  │ addons-681393 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-681393          │ jenkins │ v1.37.0 │ 27 Oct 25 21:56 UTC │                     │
	│ addons  │ enable headlamp -p addons-681393 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-681393          │ jenkins │ v1.37.0 │ 27 Oct 25 21:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 21:53:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 21:53:44.196138  487076 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:53:44.196413  487076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:53:44.196423  487076 out.go:374] Setting ErrFile to fd 2...
	I1027 21:53:44.196428  487076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:53:44.196697  487076 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:53:44.197331  487076 out.go:368] Setting JSON to false
	I1027 21:53:44.198586  487076 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5763,"bootTime":1761596261,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 21:53:44.198715  487076 start.go:143] virtualization: kvm guest
	I1027 21:53:44.200288  487076 out.go:179] * [addons-681393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 21:53:44.201585  487076 notify.go:221] Checking for updates...
	I1027 21:53:44.201592  487076 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 21:53:44.202558  487076 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 21:53:44.203530  487076 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 21:53:44.204426  487076 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 21:53:44.205329  487076 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 21:53:44.206250  487076 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 21:53:44.207356  487076 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 21:53:44.230412  487076 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 21:53:44.230499  487076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 21:53:44.287750  487076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-27 21:53:44.278034178 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 21:53:44.287861  487076 docker.go:318] overlay module found
	I1027 21:53:44.289313  487076 out.go:179] * Using the docker driver based on user configuration
	I1027 21:53:44.290208  487076 start.go:307] selected driver: docker
	I1027 21:53:44.290225  487076 start.go:928] validating driver "docker" against <nil>
	I1027 21:53:44.290248  487076 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 21:53:44.290815  487076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 21:53:44.351289  487076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:47 SystemTime:2025-10-27 21:53:44.340980418 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 21:53:44.351459  487076 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 21:53:44.351673  487076 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 21:53:44.352915  487076 out.go:179] * Using Docker driver with root privileges
	I1027 21:53:44.353821  487076 cni.go:84] Creating CNI manager for ""
	I1027 21:53:44.353892  487076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 21:53:44.353904  487076 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 21:53:44.354005  487076 start.go:351] cluster config:
	{Name:addons-681393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:53:44.355071  487076 out.go:179] * Starting "addons-681393" primary control-plane node in "addons-681393" cluster
	I1027 21:53:44.355972  487076 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 21:53:44.356858  487076 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 21:53:44.357681  487076 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:53:44.357711  487076 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 21:53:44.357717  487076 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 21:53:44.357817  487076 cache.go:59] Caching tarball of preloaded images
	I1027 21:53:44.357913  487076 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 21:53:44.357924  487076 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 21:53:44.358285  487076 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/config.json ...
	I1027 21:53:44.358314  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/config.json: {Name:mkeb388ab1ce30b216f0956f96929fe834e2e844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:53:44.373641  487076 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 21:53:44.373748  487076 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 21:53:44.373765  487076 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 21:53:44.373769  487076 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 21:53:44.373780  487076 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 21:53:44.373787  487076 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1027 21:53:57.410864  487076 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1027 21:53:57.410907  487076 cache.go:233] Successfully downloaded all kic artifacts
	I1027 21:53:57.410957  487076 start.go:360] acquireMachinesLock for addons-681393: {Name:mka31f444ade0febfee0aa58b30475f233a1624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 21:53:57.411091  487076 start.go:364] duration metric: took 104.073µs to acquireMachinesLock for "addons-681393"
	I1027 21:53:57.411135  487076 start.go:93] Provisioning new machine with config: &{Name:addons-681393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 21:53:57.411201  487076 start.go:125] createHost starting for "" (driver="docker")
	I1027 21:53:57.412647  487076 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1027 21:53:57.412901  487076 start.go:159] libmachine.API.Create for "addons-681393" (driver="docker")
	I1027 21:53:57.412936  487076 client.go:173] LocalClient.Create starting
	I1027 21:53:57.413053  487076 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 21:53:57.547980  487076 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 21:53:57.602736  487076 cli_runner.go:164] Run: docker network inspect addons-681393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 21:53:57.620075  487076 cli_runner.go:211] docker network inspect addons-681393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 21:53:57.620152  487076 network_create.go:284] running [docker network inspect addons-681393] to gather additional debugging logs...
	I1027 21:53:57.620176  487076 cli_runner.go:164] Run: docker network inspect addons-681393
	W1027 21:53:57.635848  487076 cli_runner.go:211] docker network inspect addons-681393 returned with exit code 1
	I1027 21:53:57.635879  487076 network_create.go:287] error running [docker network inspect addons-681393]: docker network inspect addons-681393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-681393 not found
	I1027 21:53:57.635905  487076 network_create.go:289] output of [docker network inspect addons-681393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-681393 not found
	
	** /stderr **
	I1027 21:53:57.636036  487076 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 21:53:57.651582  487076 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001616cc0}
	I1027 21:53:57.651622  487076 network_create.go:124] attempt to create docker network addons-681393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1027 21:53:57.651681  487076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-681393 addons-681393
	I1027 21:53:57.709008  487076 network_create.go:108] docker network addons-681393 192.168.49.0/24 created
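
The two steps above are minikube's standard pattern: probe for the first free private /24, then create a dedicated bridge network for the cluster. A minimal by-hand equivalent, using the name, subnet, and MTU from this log (a sketch; it omits the ip-masq/icc options and labels the full Run line shows, and assumes the network does not already exist):

    # recreate the bridge network roughly as the log shows
    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o com.docker.network.driver.mtu=1500 addons-681393
    # confirm the subnet/gateway that containers on this network will get
    docker network inspect addons-681393 --format '{{json .IPAM.Config}}'
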
	I1027 21:53:57.709041  487076 kic.go:121] calculated static IP "192.168.49.2" for the "addons-681393" container
	I1027 21:53:57.709212  487076 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 21:53:57.724843  487076 cli_runner.go:164] Run: docker volume create addons-681393 --label name.minikube.sigs.k8s.io=addons-681393 --label created_by.minikube.sigs.k8s.io=true
	I1027 21:53:57.742196  487076 oci.go:103] Successfully created a docker volume addons-681393
	I1027 21:53:57.742304  487076 cli_runner.go:164] Run: docker run --rm --name addons-681393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-681393 --entrypoint /usr/bin/test -v addons-681393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 21:54:00.277070  487076 cli_runner.go:217] Completed: docker run --rm --name addons-681393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-681393 --entrypoint /usr/bin/test -v addons-681393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.534718282s)
	I1027 21:54:00.277102  487076 oci.go:107] Successfully prepared a docker volume addons-681393
	I1027 21:54:00.277153  487076 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:54:00.277179  487076 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 21:54:00.277250  487076 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-681393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 21:54:04.683664  487076 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-681393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.406360153s)
	I1027 21:54:04.683700  487076 kic.go:203] duration metric: took 4.406516454s to extract preloaded images to volume ...
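
What happened in the last few steps: a throwaway "preload-sidecar" container ran /usr/bin/test -d /var/lib purely to force Docker to create and populate the named volume, then a second container untarred the lz4-compressed preload into it. A hand-rolled sketch of the extraction step (IMG elides the @sha256 digest shown in the log; PRELOAD is the tarball path from the Run line above):

    IMG=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773
    PRELOAD=/home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    # tar delegates decompression to lz4 via -I; the cluster volume mounts at /extractDir
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro -v addons-681393:/extractDir \
      "$IMG" -I lz4 -xf /preloaded.tar -C /extractDir
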
	W1027 21:54:04.683806  487076 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 21:54:04.683874  487076 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 21:54:04.683927  487076 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 21:54:04.741110  487076 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-681393 --name addons-681393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-681393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-681393 --network addons-681393 --ip 192.168.49.2 --volume addons-681393:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 21:54:05.033395  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Running}}
	I1027 21:54:05.052211  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:05.070829  487076 cli_runner.go:164] Run: docker exec addons-681393 stat /var/lib/dpkg/alternatives/iptables
	I1027 21:54:05.121723  487076 oci.go:144] the created container "addons-681393" has a running status.
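
Note the --publish=127.0.0.1::PORT form in the docker run above: leaving the host-port field empty asks Docker to pick a free ephemeral port, which minikube then reads back with container inspect. The same lookup by hand, for the SSH port:

    docker container inspect addons-681393 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

In this run that resolves to 32768, the port every later SSH client line uses.
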
	I1027 21:54:05.121791  487076 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa...
	I1027 21:54:05.441384  487076 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 21:54:05.467223  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:05.486829  487076 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 21:54:05.486849  487076 kic_runner.go:114] Args: [docker exec --privileged addons-681393 chown docker:docker /home/docker/.ssh/authorized_keys]
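
Key provisioning is just a keypair generated on the host plus a privileged exec that installs the public half as ~docker/.ssh/authorized_keys inside the container. A quick login check with the paths and port from this log (assumes the container is still running):

    ssh -o StrictHostKeyChecking=no -p 32768 \
      -i /home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa \
      docker@127.0.0.1 hostname
    # expected output: addons-681393
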
	I1027 21:54:05.530990  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:05.548580  487076 machine.go:94] provisionDockerMachine start ...
	I1027 21:54:05.548695  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:05.567072  487076 main.go:143] libmachine: Using SSH client type: native
	I1027 21:54:05.567417  487076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1027 21:54:05.567433  487076 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 21:54:05.708166  487076 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-681393
	
	I1027 21:54:05.708208  487076 ubuntu.go:182] provisioning hostname "addons-681393"
	I1027 21:54:05.708317  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:05.726628  487076 main.go:143] libmachine: Using SSH client type: native
	I1027 21:54:05.726852  487076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1027 21:54:05.726866  487076 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-681393 && echo "addons-681393" | sudo tee /etc/hostname
	I1027 21:54:05.876717  487076 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-681393
	
	I1027 21:54:05.876798  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:05.893775  487076 main.go:143] libmachine: Using SSH client type: native
	I1027 21:54:05.894032  487076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1027 21:54:05.894050  487076 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-681393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-681393/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-681393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 21:54:06.034846  487076 main.go:143] libmachine: SSH cmd err, output: <nil>: 
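
The script above keeps /etc/hosts consistent with the new hostname: if no line already ends in addons-681393, it rewrites (or appends) the 127.0.1.1 entry, the Debian convention for a machine's own name. A spot check from the host:

    docker exec addons-681393 grep 127.0.1.1 /etc/hosts
    # expected, based on the script above: 127.0.1.1 addons-681393
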
	I1027 21:54:06.034879  487076 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 21:54:06.034910  487076 ubuntu.go:190] setting up certificates
	I1027 21:54:06.034936  487076 provision.go:84] configureAuth start
	I1027 21:54:06.035005  487076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-681393
	I1027 21:54:06.053178  487076 provision.go:143] copyHostCerts
	I1027 21:54:06.053279  487076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 21:54:06.053445  487076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 21:54:06.053572  487076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 21:54:06.053665  487076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.addons-681393 san=[127.0.0.1 192.168.49.2 addons-681393 localhost minikube]
	I1027 21:54:06.495624  487076 provision.go:177] copyRemoteCerts
	I1027 21:54:06.495693  487076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 21:54:06.495746  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:06.513370  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:06.614696  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 21:54:06.635122  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 21:54:06.654062  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 21:54:06.672926  487076 provision.go:87] duration metric: took 637.959139ms to configureAuth
	I1027 21:54:06.672980  487076 ubuntu.go:206] setting minikube options for container-runtime
	I1027 21:54:06.673183  487076 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:54:06.673300  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:06.691147  487076 main.go:143] libmachine: Using SSH client type: native
	I1027 21:54:06.691379  487076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1027 21:54:06.691404  487076 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 21:54:06.942756  487076 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 21:54:06.942786  487076 machine.go:97] duration metric: took 1.394182586s to provisionDockerMachine
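
The printf-to-tee SSH command just before this wrote /etc/sysconfig/crio.minikube and restarted crio; the kicbase crio unit is assumed to source that file (an EnvironmentFile hook), which is how the service CIDR 10.96.0.0/12 becomes an insecure registry for the runtime. Both pieces can be inspected:

    docker exec addons-681393 cat /etc/sysconfig/crio.minikube
    docker exec addons-681393 systemctl cat crio   # look for the EnvironmentFile line (assumption)
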
	I1027 21:54:06.942801  487076 client.go:176] duration metric: took 9.529840427s to LocalClient.Create
	I1027 21:54:06.942821  487076 start.go:167] duration metric: took 9.529921339s to libmachine.API.Create "addons-681393"
	I1027 21:54:06.942831  487076 start.go:293] postStartSetup for "addons-681393" (driver="docker")
	I1027 21:54:06.942844  487076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 21:54:06.942920  487076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 21:54:06.943000  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:06.960764  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:07.062936  487076 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 21:54:07.066417  487076 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 21:54:07.066446  487076 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 21:54:07.066459  487076 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 21:54:07.066529  487076 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 21:54:07.066557  487076 start.go:296] duration metric: took 123.719178ms for postStartSetup
	I1027 21:54:07.066849  487076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-681393
	I1027 21:54:07.085290  487076 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/config.json ...
	I1027 21:54:07.085554  487076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 21:54:07.085597  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:07.102039  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:07.199332  487076 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 21:54:07.204131  487076 start.go:128] duration metric: took 9.79291352s to createHost
	I1027 21:54:07.204156  487076 start.go:83] releasing machines lock for "addons-681393", held for 9.793051774s
	I1027 21:54:07.204224  487076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-681393
	I1027 21:54:07.220843  487076 ssh_runner.go:195] Run: cat /version.json
	I1027 21:54:07.220887  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:07.220935  487076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 21:54:07.221028  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:07.238116  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:07.238553  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:07.387851  487076 ssh_runner.go:195] Run: systemctl --version
	I1027 21:54:07.394478  487076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 21:54:07.429677  487076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 21:54:07.434513  487076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 21:54:07.434571  487076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 21:54:07.460129  487076 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
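
Here minikube sidelines any bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving the kindnet CNI chosen earlier as the only active network plugin. The rename is reversible, which a directory listing makes obvious:

    docker exec addons-681393 ls /etc/cni/net.d/
    # the bridge/podman conflists listed in the log should now carry the .mk_disabled suffix
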
	I1027 21:54:07.460160  487076 start.go:496] detecting cgroup driver to use...
	I1027 21:54:07.460199  487076 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 21:54:07.460257  487076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 21:54:07.476354  487076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 21:54:07.488734  487076 docker.go:218] disabling cri-docker service (if available) ...
	I1027 21:54:07.488796  487076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 21:54:07.504774  487076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 21:54:07.522817  487076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 21:54:07.604458  487076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 21:54:07.691886  487076 docker.go:234] disabling docker service ...
	I1027 21:54:07.691977  487076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 21:54:07.711186  487076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 21:54:07.723987  487076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 21:54:07.801751  487076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 21:54:07.884336  487076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
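
With CRI-O as the runtime, both cri-docker and docker are stopped, disabled, and masked so nothing else can claim container workloads or the CRI socket. Masking is the strongest of the three and is easy to confirm (the command exits non-zero for masked units, which is expected here):

    docker exec addons-681393 systemctl is-enabled docker.service cri-docker.socket
    # expected output, based on the mask commands above: masked
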
	I1027 21:54:07.896841  487076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 21:54:07.910746  487076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 21:54:07.910812  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.920816  487076 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 21:54:07.920880  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.929810  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.938445  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.947250  487076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 21:54:07.955237  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.963729  487076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.976935  487076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 21:54:07.985425  487076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 21:54:07.992454  487076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 21:54:07.999570  487076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:54:08.076718  487076 ssh_runner.go:195] Run: sudo systemctl restart crio
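
The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd (matching the driver detected on the host), put conmon in the pod cgroup, and open unprivileged ports from 0 via default_sysctls. After the restart, the drop-in should read roughly as follows (a sketch assembled from the sed commands, not captured output):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
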
	I1027 21:54:08.181400  487076 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 21:54:08.181478  487076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 21:54:08.185925  487076 start.go:564] Will wait 60s for crictl version
	I1027 21:54:08.186001  487076 ssh_runner.go:195] Run: which crictl
	I1027 21:54:08.189580  487076 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 21:54:08.215607  487076 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 21:54:08.215689  487076 ssh_runner.go:195] Run: crio --version
	I1027 21:54:08.243925  487076 ssh_runner.go:195] Run: crio --version
	I1027 21:54:08.273531  487076 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 21:54:08.274718  487076 cli_runner.go:164] Run: docker network inspect addons-681393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 21:54:08.292050  487076 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 21:54:08.296345  487076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
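
This grep-and-rewrite idiom appends a stable alias for the network gateway, so workloads inside the node can reach services on the host as host.minikube.internal. A spot check (getent is assumed present in the Debian-based kicbase image):

    docker exec addons-681393 getent hosts host.minikube.internal
    # expected: 192.168.49.1   host.minikube.internal
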
	I1027 21:54:08.306680  487076 kubeadm.go:884] updating cluster {Name:addons-681393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 21:54:08.306792  487076 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:54:08.306837  487076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 21:54:08.339004  487076 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 21:54:08.339028  487076 crio.go:433] Images already preloaded, skipping extraction
	I1027 21:54:08.339082  487076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 21:54:08.366590  487076 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 21:54:08.366617  487076 cache_images.go:86] Images are preloaded, skipping loading
	I1027 21:54:08.366625  487076 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1027 21:54:08.366736  487076 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-681393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
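
The empty ExecStart= line in the unit text above is deliberate systemd drop-in syntax: it clears the base unit's command before the override supplies the full kubelet invocation. The rendered drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below) and the effective unit can be reviewed with:

    docker exec addons-681393 systemctl cat kubelet
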
	I1027 21:54:08.366803  487076 ssh_runner.go:195] Run: crio config
	I1027 21:54:08.413816  487076 cni.go:84] Creating CNI manager for ""
	I1027 21:54:08.413837  487076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 21:54:08.413857  487076 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 21:54:08.413882  487076 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-681393 NodeName:addons-681393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 21:54:08.414020  487076 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-681393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
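
This rendered four-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later drives kubeadm init. Assuming kubeadm sits beside kubelet under /var/lib/minikube/binaries/v1.34.1 (minikube invokes it itself; this is only a sketch), a dry run against it would look like:

    docker exec addons-681393 /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
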
	
	I1027 21:54:08.414099  487076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 21:54:08.422593  487076 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 21:54:08.422687  487076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 21:54:08.430523  487076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 21:54:08.442938  487076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 21:54:08.457924  487076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1027 21:54:08.470799  487076 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 21:54:08.474559  487076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 21:54:08.484283  487076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:54:08.559619  487076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 21:54:08.580605  487076 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393 for IP: 192.168.49.2
	I1027 21:54:08.580631  487076 certs.go:195] generating shared ca certs ...
	I1027 21:54:08.580647  487076 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:08.580798  487076 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 21:54:08.762609  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt ...
	I1027 21:54:08.762642  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt: {Name:mk6bcc704cee40f583b2e9c7ae9ea195abf7214d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:08.762839  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key ...
	I1027 21:54:08.762850  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key: {Name:mk7a7b8deca77163260202e72f732a394a4db049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:08.762927  487076 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 21:54:09.146118  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt ...
	I1027 21:54:09.146155  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt: {Name:mk4b436ce6a95536f63be1ea5da174a20f4ac530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.146342  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key ...
	I1027 21:54:09.146353  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key: {Name:mk4847455581689491a6bf7b9ac6f36470c32a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.146425  487076 certs.go:257] generating profile certs ...
	I1027 21:54:09.146502  487076 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.key
	I1027 21:54:09.146517  487076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt with IP's: []
	I1027 21:54:09.619928  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt ...
	I1027 21:54:09.619969  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: {Name:mka12e803ffb734b9b8fbd52c50d7f8ff1b3b48a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.620195  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.key ...
	I1027 21:54:09.620212  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.key: {Name:mk262048845205be7a32e300b1501d8a59098073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.620324  487076 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key.57447788
	I1027 21:54:09.620355  487076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt.57447788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1027 21:54:09.842107  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt.57447788 ...
	I1027 21:54:09.842143  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt.57447788: {Name:mka2591bb91c57245fbeb03b480901a5062a0ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.842373  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key.57447788 ...
	I1027 21:54:09.842393  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key.57447788: {Name:mk08dc4390b209ab64acba576028fc77cb955e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.842505  487076 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt.57447788 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt
	I1027 21:54:09.842613  487076 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key.57447788 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key
	I1027 21:54:09.842686  487076 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.key
	I1027 21:54:09.842723  487076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.crt with IP's: []
	I1027 21:54:09.983297  487076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.crt ...
	I1027 21:54:09.983329  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.crt: {Name:mkdb2c2420ba72ff809e68c2c013664c4764445c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.983545  487076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.key ...
	I1027 21:54:09.983566  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.key: {Name:mk4507bac571aa59f6c90fe6f0a21dd5e9ccdb08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:09.983816  487076 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 21:54:09.983861  487076 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 21:54:09.983898  487076 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 21:54:09.983929  487076 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
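
The shared-CA steps above amount to generating self-signed CA key pairs and writing them out as PEM. A minimal sketch using only crypto/x509; the key size, subject, and validity below are illustrative assumptions, not minikube's exact parameters:

// Minimal sketch of generating a self-signed CA like the "minikubeCA"
// step above, standard library only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	certOut, _ := os.Create("ca.crt")
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyOut, _ := os.Create("ca.key")
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
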
	I1027 21:54:09.984647  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 21:54:10.003724  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 21:54:10.022067  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 21:54:10.040346  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 21:54:10.058132  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 21:54:10.076152  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 21:54:10.094505  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 21:54:10.112335  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 21:54:10.129913  487076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 21:54:10.149341  487076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 21:54:10.162528  487076 ssh_runner.go:195] Run: openssl version
	I1027 21:54:10.169245  487076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 21:54:10.179968  487076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:54:10.184117  487076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:54:10.184201  487076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 21:54:10.222598  487076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
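
The b5213941.0 symlink follows OpenSSL's trust-store convention: certificates are looked up by subject hash, so the hash is computed first and then linked as <hash>.0 under /etc/ssl/certs. A sketch that shells out to the same openssl invocation the log uses and creates the link; it assumes openssl is on PATH and the process may write /etc/ssl/certs:

// Sketch of the trust-store linking above: ask openssl for the subject
// hash of the CA, then link /etc/ssl/certs/<hash>.0 at the PEM so
// OpenSSL-based clients can find it.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
}
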
	I1027 21:54:10.231733  487076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 21:54:10.235415  487076 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 21:54:10.235460  487076 kubeadm.go:401] StartCluster: {Name:addons-681393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-681393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:54:10.235559  487076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:54:10.235629  487076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:54:10.264489  487076 cri.go:89] found id: ""
	I1027 21:54:10.264550  487076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 21:54:10.273177  487076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 21:54:10.281356  487076 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 21:54:10.281419  487076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 21:54:10.289299  487076 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 21:54:10.289316  487076 kubeadm.go:158] found existing configuration files:
	
	I1027 21:54:10.289354  487076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 21:54:10.296990  487076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 21:54:10.297041  487076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 21:54:10.304337  487076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 21:54:10.311839  487076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 21:54:10.311902  487076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 21:54:10.319098  487076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 21:54:10.326854  487076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 21:54:10.326913  487076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 21:54:10.334712  487076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 21:54:10.342519  487076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 21:54:10.342589  487076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
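
The stale-config cleanup above is a check-then-delete loop: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise so kubeadm regenerates it. A standard-library Go sketch of that loop:

// Sketch of the stale kubeconfig cleanup above: keep a file only if it
// already references the expected endpoint; otherwise remove it so
// kubeadm can write a fresh one.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the right endpoint; keep it
		}
		// Missing or stale: make sure it is gone.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			log.Fatal(err)
		}
	}
}
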
	I1027 21:54:10.350034  487076 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 21:54:10.390857  487076 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 21:54:10.390937  487076 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 21:54:10.412642  487076 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 21:54:10.412725  487076 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 21:54:10.412765  487076 kubeadm.go:319] OS: Linux
	I1027 21:54:10.412825  487076 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 21:54:10.412885  487076 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 21:54:10.412938  487076 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 21:54:10.413009  487076 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 21:54:10.413065  487076 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 21:54:10.413129  487076 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 21:54:10.413186  487076 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 21:54:10.413251  487076 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 21:54:10.476095  487076 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 21:54:10.476238  487076 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 21:54:10.476368  487076 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 21:54:10.485278  487076 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 21:54:10.487060  487076 out.go:252]   - Generating certificates and keys ...
	I1027 21:54:10.487171  487076 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 21:54:10.487283  487076 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 21:54:10.707292  487076 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 21:54:11.105894  487076 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 21:54:11.947310  487076 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 21:54:12.446295  487076 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 21:54:12.584355  487076 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 21:54:12.584483  487076 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-681393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 21:54:12.734689  487076 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 21:54:12.734886  487076 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-681393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 21:54:12.966708  487076 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 21:54:13.527144  487076 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 21:54:13.667071  487076 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 21:54:13.667177  487076 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 21:54:14.367159  487076 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 21:54:14.579565  487076 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 21:54:14.762325  487076 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 21:54:14.967994  487076 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 21:54:15.054403  487076 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 21:54:15.054898  487076 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 21:54:15.058599  487076 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 21:54:15.061878  487076 out.go:252]   - Booting up control plane ...
	I1027 21:54:15.062017  487076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 21:54:15.062117  487076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 21:54:15.062224  487076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 21:54:15.075075  487076 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 21:54:15.075183  487076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 21:54:15.081890  487076 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 21:54:15.082120  487076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 21:54:15.082179  487076 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 21:54:15.177211  487076 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 21:54:15.177336  487076 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 21:54:15.679093  487076 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.980126ms
	I1027 21:54:15.682886  487076 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 21:54:15.683029  487076 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1027 21:54:15.683173  487076 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 21:54:15.683291  487076 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 21:54:17.675020  487076 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.991947425s
	I1027 21:54:17.783162  487076 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.100155054s
	I1027 21:54:19.185532  487076 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50257414s
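
The control-plane-check phase above polls each component's health endpoint until it answers or the 4m0s budget expires. A minimal sketch of that kind of polling against the apiserver's /livez; TLS verification is skipped here for brevity, whereas kubeadm itself checks the endpoint properly:

// Minimal sketch of the control-plane health polling above: hit the
// apiserver's /livez until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/livez")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kube-apiserver is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("control plane did not become healthy in time")
}
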
	I1027 21:54:19.197521  487076 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 21:54:19.209258  487076 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 21:54:19.219808  487076 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 21:54:19.220135  487076 kubeadm.go:319] [mark-control-plane] Marking the node addons-681393 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 21:54:19.229138  487076 kubeadm.go:319] [bootstrap-token] Using token: ztjz0y.5i3bg84f6s7j3keq
	I1027 21:54:19.230540  487076 out.go:252]   - Configuring RBAC rules ...
	I1027 21:54:19.230698  487076 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 21:54:19.234720  487076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 21:54:19.240750  487076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 21:54:19.243534  487076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 21:54:19.247436  487076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 21:54:19.250113  487076 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 21:54:19.592659  487076 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 21:54:20.007211  487076 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 21:54:20.591373  487076 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 21:54:20.594292  487076 kubeadm.go:319] 
	I1027 21:54:20.594403  487076 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 21:54:20.594415  487076 kubeadm.go:319] 
	I1027 21:54:20.594523  487076 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 21:54:20.594533  487076 kubeadm.go:319] 
	I1027 21:54:20.594582  487076 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 21:54:20.594688  487076 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 21:54:20.594762  487076 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 21:54:20.594780  487076 kubeadm.go:319] 
	I1027 21:54:20.594852  487076 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 21:54:20.594862  487076 kubeadm.go:319] 
	I1027 21:54:20.594936  487076 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 21:54:20.594978  487076 kubeadm.go:319] 
	I1027 21:54:20.595069  487076 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 21:54:20.595177  487076 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 21:54:20.595277  487076 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 21:54:20.595287  487076 kubeadm.go:319] 
	I1027 21:54:20.595388  487076 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 21:54:20.595474  487076 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 21:54:20.595487  487076 kubeadm.go:319] 
	I1027 21:54:20.595587  487076 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ztjz0y.5i3bg84f6s7j3keq \
	I1027 21:54:20.595708  487076 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 21:54:20.595756  487076 kubeadm.go:319] 	--control-plane 
	I1027 21:54:20.595784  487076 kubeadm.go:319] 
	I1027 21:54:20.595906  487076 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 21:54:20.595916  487076 kubeadm.go:319] 
	I1027 21:54:20.596038  487076 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ztjz0y.5i3bg84f6s7j3keq \
	I1027 21:54:20.596188  487076 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
	I1027 21:54:20.598618  487076 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 21:54:20.598728  487076 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 21:54:20.598759  487076 cni.go:84] Creating CNI manager for ""
	I1027 21:54:20.598770  487076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 21:54:20.600166  487076 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 21:54:20.601063  487076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 21:54:20.605501  487076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 21:54:20.605518  487076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 21:54:20.619118  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 21:54:20.834581  487076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 21:54:20.834666  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:20.834711  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-681393 minikube.k8s.io/updated_at=2025_10_27T21_54_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=addons-681393 minikube.k8s.io/primary=true
	I1027 21:54:20.930240  487076 ops.go:34] apiserver oom_adj: -16
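
The oom_adj probe above confirms the kernel is strongly discouraged from OOM-killing the apiserver (-16). A Linux-only sketch that locates kube-apiserver via /proc/<pid>/comm and reads the same (deprecated) oom_adj file the log does:

// Sketch of the oom_adj check above: scan /proc for the kube-apiserver
// process and print its OOM adjustment. Linux-only.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// Non-PID entries fail the read below and are skipped.
		comm, err := os.ReadFile("/proc/" + e.Name() + "/comm")
		if err != nil || strings.TrimSpace(string(comm)) != "kube-apiserver" {
			continue
		}
		adj, err := os.ReadFile("/proc/" + e.Name() + "/oom_adj")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
		return
	}
	log.Fatal("kube-apiserver not found")
}
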
	I1027 21:54:20.930257  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:21.431033  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:21.930779  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:22.431204  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:22.931299  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:23.430479  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:23.930413  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:24.430491  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:24.931047  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:25.430924  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:25.930836  487076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 21:54:25.998100  487076 kubeadm.go:1114] duration metric: took 5.163502606s to wait for elevateKubeSystemPrivileges
	I1027 21:54:25.998141  487076 kubeadm.go:403] duration metric: took 15.762683617s to StartCluster
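
The repeated "kubectl get sa default" calls above are a readiness poll: once the default ServiceAccount exists, kube-system privileges can be granted and StartCluster completes. A sketch of the same 500ms loop via os/exec, using the binary and kubeconfig paths from the log:

// Sketch of the ServiceAccount readiness poll above: retry
// "kubectl get sa default" every 500ms until it succeeds or a timeout
// expires. The 2-minute budget here is an illustrative assumption.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			log.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default service account")
}
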
	I1027 21:54:25.998167  487076 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:25.998290  487076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 21:54:25.998867  487076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 21:54:25.999147  487076 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 21:54:25.999192  487076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 21:54:25.999210  487076 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 21:54:25.999473  487076 addons.go:69] Setting yakd=true in profile "addons-681393"
	I1027 21:54:25.999516  487076 addons.go:238] Setting addon yakd=true in "addons-681393"
	I1027 21:54:25.999555  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:25.999574  487076 addons.go:69] Setting inspektor-gadget=true in profile "addons-681393"
	I1027 21:54:25.999616  487076 addons.go:238] Setting addon inspektor-gadget=true in "addons-681393"
	I1027 21:54:25.999670  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:25.999741  487076 addons.go:69] Setting metrics-server=true in profile "addons-681393"
	I1027 21:54:25.999768  487076 addons.go:238] Setting addon metrics-server=true in "addons-681393"
	I1027 21:54:25.999796  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:25.999990  487076 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-681393"
	I1027 21:54:25.999998  487076 addons.go:69] Setting storage-provisioner=true in profile "addons-681393"
	I1027 21:54:26.000026  487076 addons.go:238] Setting addon storage-provisioner=true in "addons-681393"
	I1027 21:54:26.000065  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.000079  487076 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-681393"
	I1027 21:54:26.000102  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.000183  487076 addons.go:69] Setting gcp-auth=true in profile "addons-681393"
	I1027 21:54:26.000215  487076 mustload.go:66] Loading cluster: addons-681393
	I1027 21:54:26.000276  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.000357  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.000744  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.001237  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.001730  487076 addons.go:69] Setting default-storageclass=true in profile "addons-681393"
	I1027 21:54:26.001761  487076 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-681393"
	I1027 21:54:26.002080  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.002291  487076 out.go:179] * Verifying Kubernetes components...
	I1027 21:54:26.002511  487076 addons.go:69] Setting volcano=true in profile "addons-681393"
	I1027 21:54:26.002532  487076 addons.go:238] Setting addon volcano=true in "addons-681393"
	I1027 21:54:26.002563  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.002638  487076 addons.go:69] Setting ingress=true in profile "addons-681393"
	I1027 21:54:26.002651  487076 addons.go:238] Setting addon ingress=true in "addons-681393"
	I1027 21:54:26.002682  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.002693  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.003296  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.003624  487076 addons.go:69] Setting volumesnapshots=true in profile "addons-681393"
	I1027 21:54:26.003651  487076 addons.go:238] Setting addon volumesnapshots=true in "addons-681393"
	I1027 21:54:26.003682  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.003893  487076 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:54:26.004143  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.004155  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.004548  487076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 21:54:26.004750  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:25.999403  487076 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:54:26.004880  487076 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-681393"
	I1027 21:54:26.004906  487076 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-681393"
	I1027 21:54:26.005071  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.005493  487076 addons.go:69] Setting ingress-dns=true in profile "addons-681393"
	I1027 21:54:26.005513  487076 addons.go:238] Setting addon ingress-dns=true in "addons-681393"
	I1027 21:54:26.005531  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.005545  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.005578  487076 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-681393"
	I1027 21:54:26.005604  487076 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-681393"
	I1027 21:54:26.006035  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.006403  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.008083  487076 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-681393"
	I1027 21:54:26.008118  487076 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-681393"
	I1027 21:54:26.008155  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.008629  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.009107  487076 addons.go:69] Setting cloud-spanner=true in profile "addons-681393"
	I1027 21:54:26.009133  487076 addons.go:238] Setting addon cloud-spanner=true in "addons-681393"
	I1027 21:54:26.009163  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.012773  487076 addons.go:69] Setting registry=true in profile "addons-681393"
	I1027 21:54:26.013052  487076 addons.go:238] Setting addon registry=true in "addons-681393"
	I1027 21:54:26.013217  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.014689  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.014873  487076 addons.go:69] Setting registry-creds=true in profile "addons-681393"
	I1027 21:54:26.015776  487076 addons.go:238] Setting addon registry-creds=true in "addons-681393"
	I1027 21:54:26.015810  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.017500  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.018572  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.051816  487076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 21:54:26.054028  487076 addons.go:238] Setting addon default-storageclass=true in "addons-681393"
	I1027 21:54:26.054079  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.054573  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	W1027 21:54:26.056063  487076 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 21:54:26.056076  487076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:54:26.057215  487076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:54:26.058587  487076 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 21:54:26.058608  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 21:54:26.058665  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.090496  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 21:54:26.097478  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 21:54:26.097518  487076 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 21:54:26.097592  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.097616  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.101923  487076 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 21:54:26.101978  487076 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 21:54:26.104546  487076 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 21:54:26.104645  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 21:54:26.104659  487076 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 21:54:26.104807  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.106276  487076 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 21:54:26.106296  487076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 21:54:26.106357  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.107841  487076 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 21:54:26.107866  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 21:54:26.107917  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.114505  487076 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 21:54:26.115738  487076 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1027 21:54:26.116291  487076 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 21:54:26.116934  487076 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 21:54:26.116965  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 21:54:26.117052  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.117490  487076 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 21:54:26.117513  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 21:54:26.117569  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.119455  487076 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 21:54:26.119472  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 21:54:26.119528  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.123791  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 21:54:26.124787  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 21:54:26.125851  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 21:54:26.131940  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 21:54:26.133970  487076 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 21:54:26.134889  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 21:54:26.136129  487076 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 21:54:26.136147  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 21:54:26.136214  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.137514  487076 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 21:54:26.137532  487076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 21:54:26.137586  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.139202  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 21:54:26.140136  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 21:54:26.141042  487076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 21:54:26.141848  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 21:54:26.141864  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 21:54:26.141933  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.146888  487076 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 21:54:26.147051  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.147909  487076 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 21:54:26.147929  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 21:54:26.148004  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.155080  487076 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 21:54:26.159694  487076 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 21:54:26.160634  487076 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 21:54:26.160654  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 21:54:26.160737  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.164655  487076 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-681393"
	I1027 21:54:26.164773  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:26.165343  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:26.167652  487076 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 21:54:26.168524  487076 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 21:54:26.168548  487076 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 21:54:26.168617  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:26.170355  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.175618  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.186604  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.192570  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.195146  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.206205  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.219706  487076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 21:54:26.223354  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.224572  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.224971  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.225636  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.226045  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.226494  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.226543  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.228030  487076 out.go:179]   - Using image docker.io/busybox:stable
	I1027 21:54:26.229057  487076 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W1027 21:54:26.229606  487076 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 21:54:26.229832  487076 retry.go:31] will retry after 351.996233ms: ssh: handshake failed: EOF
	I1027 21:54:26.230124  487076 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 21:54:26.230142  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 21:54:26.230199  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	W1027 21:54:26.230405  487076 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 21:54:26.230421  487076 retry.go:31] will retry after 203.89578ms: ssh: handshake failed: EOF
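
The two handshake failures above are retried after short randomized delays (203.89578ms and 351.996233ms). A sketch of that jittered-retry pattern; dial() is a hypothetical stand-in for the real SSH handshake:

// Sketch of the jittered retry used for the SSH dials above: on
// failure, sleep a small random duration and try again.
package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

// dial is a placeholder for the real SSH handshake.
func dial() error {
	return errors.New("ssh: handshake failed: EOF")
}

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		err := dial()
		if err == nil {
			log.Println("connected")
			return
		}
		wait := time.Duration(200+rand.Intn(300)) * time.Millisecond
		log.Printf("attempt %d failed: %v; will retry after %v", attempt, err, wait)
		time.Sleep(wait)
	}
	log.Fatal("giving up after repeated handshake failures")
}
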
	I1027 21:54:26.244859  487076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 21:54:26.268279  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:26.305619  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 21:54:26.349470  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 21:54:26.349677  487076 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 21:54:26.359778  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 21:54:26.362183  487076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 21:54:26.362269  487076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 21:54:26.369114  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 21:54:26.369197  487076 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 21:54:26.386414  487076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 21:54:26.386439  487076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 21:54:26.396188  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 21:54:26.396485  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 21:54:26.398751  487076 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:26.398771  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 21:54:26.399116  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 21:54:26.399132  487076 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 21:54:26.410562  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 21:54:26.411036  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 21:54:26.412599  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 21:54:26.416251  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 21:54:26.426661  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 21:54:26.427263  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 21:54:26.431609  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 21:54:26.440733  487076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 21:54:26.440830  487076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 21:54:26.441548  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:26.449414  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 21:54:26.449528  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 21:54:26.452167  487076 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 21:54:26.452271  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 21:54:26.482155  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 21:54:26.482256  487076 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 21:54:26.497932  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 21:54:26.502379  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 21:54:26.502422  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 21:54:26.558572  487076 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 21:54:26.558596  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 21:54:26.562273  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 21:54:26.562298  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 21:54:26.614247  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 21:54:26.621841  487076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 21:54:26.621872  487076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 21:54:26.666686  487076 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
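
Note: the "host record injected" message above reflects minikube rewriting the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 here). A quick way to confirm the record landed, a sketch assuming the stock coredns ConfigMap name:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 hosts

The entry typically appears as a hosts { ... } block in the Corefile, ending with a fallthrough directive.
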
	I1027 21:54:26.668789  487076 node_ready.go:35] waiting up to 6m0s for node "addons-681393" to be "Ready" ...
	I1027 21:54:26.678996  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 21:54:26.679087  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 21:54:26.703739  487076 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 21:54:26.703824  487076 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 21:54:26.736559  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 21:54:26.736670  487076 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 21:54:26.818543  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 21:54:26.818651  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 21:54:26.826759  487076 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 21:54:26.826786  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 21:54:26.879083  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 21:54:26.884753  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 21:54:26.884866  487076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 21:54:26.884978  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 21:54:26.885065  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 21:54:26.917299  487076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 21:54:26.917327  487076 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 21:54:26.939192  487076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 21:54:26.939294  487076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 21:54:26.967873  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 21:54:26.995684  487076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 21:54:26.995713  487076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 21:54:27.032605  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 21:54:27.178511  487076 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-681393" context rescaled to 1 replicas
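
The "rescaled to 1 replicas" line is minikube trimming the default two-replica coredns Deployment down to one for this single-node cluster; the equivalent manual operation would be:

    kubectl -n kube-system scale deployment coredns --replicas=1
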
	I1027 21:54:27.585638  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.279976052s)
	I1027 21:54:27.585681  487076 addons.go:479] Verifying addon ingress=true in "addons-681393"
	I1027 21:54:27.585712  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.225825916s)
	I1027 21:54:27.585802  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.175202189s)
	I1027 21:54:27.585882  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174826744s)
	I1027 21:54:27.585956  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.173326464s)
	I1027 21:54:27.586096  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.169821672s)
	I1027 21:54:27.586191  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.159504064s)
	I1027 21:54:27.586223  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.158936874s)
	I1027 21:54:27.586273  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.154586466s)
	I1027 21:54:27.586357  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.144790022s)
	W1027 21:54:27.586385  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:27.586478  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.088501405s)
	I1027 21:54:27.586514  487076 retry.go:31] will retry after 244.379416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
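
The stderr above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml: every document in a manifest must declare both apiVersion and kind, so the file is either empty or missing its type metadata, and the retries that follow (the installer backs off with increasing delays) cannot fix that server-side. The failure can be reproduced in isolation, without touching the cluster:

    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml

The --validate=false escape hatch named in the error would merely suppress the check, not make the CRD apply.
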
	I1027 21:54:27.587048  487076 out.go:179] * Verifying ingress addon...
	I1027 21:54:27.588189  487076 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-681393 service yakd-dashboard -n yakd-dashboard
	
	I1027 21:54:27.589608  487076 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 21:54:27.592983  487076 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1027 21:54:27.594591  487076 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
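
The storageclass error above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass first, so this update carried a stale resourceVersion and the API server refused it. Re-reading and re-applying resolves it; a sketch of the annotation flip the callback was attempting, assuming the standard is-default-class annotation that "marking storage class local-path as non-default" refers to:

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
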
	I1027 21:54:27.831690  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:28.040107  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.425736077s)
	I1027 21:54:28.040133  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161021338s)
	I1027 21:54:28.040161  487076 addons.go:479] Verifying addon registry=true in "addons-681393"
	W1027 21:54:28.040156  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 21:54:28.040187  487076 retry.go:31] will retry after 258.257987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
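
Here the failure is ordering, not content: the csi-hostpath-snapclass object sits in the same kubectl apply batch as the CRDs that define its kind, and CRD registration is asynchronous, so the REST mapping for snapshot.storage.k8s.io/v1 did not exist yet ("ensure CRDs are installed first"). The retry loop papers over this, but the race can also be closed explicitly by waiting for the CRD to be established before applying dependents:

    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
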
	I1027 21:54:28.040498  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.070965807s)
	I1027 21:54:28.040542  487076 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-681393"
	I1027 21:54:28.040621  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.007972038s)
	I1027 21:54:28.040803  487076 addons.go:479] Verifying addon metrics-server=true in "addons-681393"
	I1027 21:54:28.042231  487076 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 21:54:28.042289  487076 out.go:179] * Verifying registry addon...
	I1027 21:54:28.044981  487076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 21:54:28.045010  487076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 21:54:28.048233  487076 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 21:54:28.048256  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:28.048376  487076 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 21:54:28.048399  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:28.094416  487076 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 21:54:28.094450  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:28.298923  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1027 21:54:28.465257  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:28.465303  487076 retry.go:31] will retry after 437.855148ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:28.548535  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:28.548647  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:28.650118  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:28.671977  487076 node_ready.go:57] node "addons-681393" has "Ready":"False" status (will retry)
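
The node stays NotReady until the CNI (kindnet in this profile) has written its config and kubelet's runtime checks pass; the poll above simply re-reads the node's Ready condition. A sketch of watching the same condition by hand:

    kubectl get node addons-681393 \
      -o jsonpath='{range .status.conditions[?(@.type=="Ready")]}{.status}: {.message}{"\n"}{end}'
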
	I1027 21:54:28.904172  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:29.050059  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:29.050178  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:29.093177  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:29.548829  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:29.548892  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:29.595080  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:30.048693  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:30.048702  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:30.092733  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:30.548991  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:30.549244  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:30.649268  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:30.672165  487076 node_ready.go:57] node "addons-681393" has "Ready":"False" status (will retry)
	I1027 21:54:30.822047  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.523075276s)
	I1027 21:54:30.822141  487076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.91792959s)
	W1027 21:54:30.822176  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:30.822206  487076 retry.go:31] will retry after 748.96164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:31.048392  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:31.048414  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:31.093810  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:31.549354  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:31.549367  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:31.572291  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:31.650741  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:32.048847  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:32.048975  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:32.092565  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:32.138096  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:32.138133  487076 retry.go:31] will retry after 945.564038ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:32.548101  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:32.548242  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:32.648928  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:33.048886  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:33.049025  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:33.083879  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:33.093390  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:33.171720  487076 node_ready.go:57] node "addons-681393" has "Ready":"False" status (will retry)
	I1027 21:54:33.548930  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:33.549212  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 21:54:33.629875  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:33.629908  487076 retry.go:31] will retry after 1.192517493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:33.649933  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:33.754496  487076 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 21:54:33.754563  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:33.771674  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
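
With the docker driver there is no VM: the "node" is a container, and SSH rides on a published port. The inspect template above pulls the host port mapped to the container's 22/tcp (32768 here), which is the same answer as:

    docker port addons-681393 22

The resulting client then connects as docker@127.0.0.1 using the profile's generated id_rsa key, exactly as the sshutil line records.
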
	I1027 21:54:33.887096  487076 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 21:54:33.899784  487076 addons.go:238] Setting addon gcp-auth=true in "addons-681393"
	I1027 21:54:33.899838  487076 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:54:33.900233  487076 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:54:33.917593  487076 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 21:54:33.917642  487076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:54:33.934090  487076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:54:34.031873  487076 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 21:54:34.032828  487076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 21:54:34.033656  487076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 21:54:34.033670  487076 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 21:54:34.046837  487076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 21:54:34.046855  487076 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 21:54:34.049196  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:34.049274  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:34.059873  487076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 21:54:34.059891  487076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 21:54:34.072363  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 21:54:34.093630  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:34.373904  487076 addons.go:479] Verifying addon gcp-auth=true in "addons-681393"
	I1027 21:54:34.374967  487076 out.go:179] * Verifying gcp-auth addon...
	I1027 21:54:34.376435  487076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 21:54:34.378712  487076 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 21:54:34.378727  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
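
gcp-auth wires credentials into pods via a mutating admission webhook served from the gcp-auth namespace, fed by the google_application_credentials.json copied onto the node earlier. A rough health check, assuming the addon's stock object names:

    kubectl -n gcp-auth get pods
    kubectl get mutatingwebhookconfigurations | grep gcp-auth
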
	I1027 21:54:34.547742  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:34.547873  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:34.592748  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:34.822612  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:34.880140  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:35.047876  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:35.047882  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:35.093028  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:54:35.172087  487076 node_ready.go:57] node "addons-681393" has "Ready":"False" status (will retry)
	W1027 21:54:35.363491  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:35.363523  487076 retry.go:31] will retry after 1.901536998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:35.379375  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:35.548228  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:35.548241  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:35.592825  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:35.879850  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:36.048821  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:36.048840  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:36.093351  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:36.379752  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:36.548918  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:36.549039  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:36.593069  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:36.880354  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:37.048233  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:37.048248  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:37.093091  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:37.265807  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:37.379995  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:37.549253  487076 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 21:54:37.549279  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:37.549446  487076 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 21:54:37.549469  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:37.593475  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:37.671425  487076 node_ready.go:49] node "addons-681393" is "Ready"
	I1027 21:54:37.671454  487076 node_ready.go:38] duration metric: took 11.002633613s for node "addons-681393" to be "Ready" ...
	I1027 21:54:37.671483  487076 api_server.go:52] waiting for apiserver process to appear ...
	I1027 21:54:37.671536  487076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
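
The apiserver liveness check here is process-level: in pgrep, -x demands an exact match, -f matches against the full command line rather than just the executable name, and -n keeps only the newest PID, so the pattern below finds the most recently started kube-apiserver launched with minikube's flags:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
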
	I1027 21:54:37.880095  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 21:54:38.035710  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:38.035739  487076 api_server.go:72] duration metric: took 12.036553087s to wait for apiserver process to appear ...
	I1027 21:54:38.035747  487076 retry.go:31] will retry after 4.250585418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:38.035754  487076 api_server.go:88] waiting for apiserver healthz status ...
	I1027 21:54:38.035776  487076 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1027 21:54:38.040938  487076 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
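
After the process check, health is confirmed over HTTPS against /healthz, which returns the literal body "ok" on success. The same probe can be run with the kubeconfig's credentials instead of raw curl:

    kubectl get --raw='/healthz'
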
	I1027 21:54:38.042252  487076 api_server.go:141] control plane version: v1.34.1
	I1027 21:54:38.042285  487076 api_server.go:131] duration metric: took 6.521806ms to wait for apiserver health ...
	I1027 21:54:38.042297  487076 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 21:54:38.048605  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:38.048639  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:38.049728  487076 system_pods.go:59] 20 kube-system pods found
	I1027 21:54:38.049771  487076 system_pods.go:61] "amd-gpu-device-plugin-txrzm" [24503293-388b-4873-bc11-107a24f28f57] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 21:54:38.049780  487076 system_pods.go:61] "coredns-66bc5c9577-8pt79" [87832036-6af9-4dc9-9b16-1bcf3671b894] Running
	I1027 21:54:38.049795  487076 system_pods.go:61] "csi-hostpath-attacher-0" [ea66be78-f7b8-4684-b477-b41500f5e426] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 21:54:38.049814  487076 system_pods.go:61] "csi-hostpath-resizer-0" [42938fd1-8761-4f67-874e-41d6224778a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 21:54:38.049826  487076 system_pods.go:61] "csi-hostpathplugin-p5sgs" [ab3b75d3-2e4b-408e-9216-3d162a34c2d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 21:54:38.049836  487076 system_pods.go:61] "etcd-addons-681393" [4a99290c-ab6d-40bc-a014-b5e2a655d0ff] Running
	I1027 21:54:38.049842  487076 system_pods.go:61] "kindnet-5g7gz" [a82f4737-bdb6-4fc8-803d-afa31237a5a0] Running
	I1027 21:54:38.049856  487076 system_pods.go:61] "kube-apiserver-addons-681393" [013f5a64-e0b0-4aaa-bb65-8f9230b5b663] Running
	I1027 21:54:38.049865  487076 system_pods.go:61] "kube-controller-manager-addons-681393" [3b41da40-aeb6-4896-bc6d-59c3b1d565c4] Running
	I1027 21:54:38.049874  487076 system_pods.go:61] "kube-ingress-dns-minikube" [b88574d6-394b-4266-a1a1-191b7686c64e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 21:54:38.049883  487076 system_pods.go:61] "kube-proxy-9nhv5" [dbc6ef4d-5de8-4e7f-a6ee-e79d3c8afe68] Running
	I1027 21:54:38.049889  487076 system_pods.go:61] "kube-scheduler-addons-681393" [5f8387c3-53fc-4f5a-88c8-ee8f38995cf5] Running
	I1027 21:54:38.049904  487076 system_pods.go:61] "metrics-server-85b7d694d7-nkkls" [1c66ed47-adbe-4977-9533-1e61982c1a89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 21:54:38.049918  487076 system_pods.go:61] "nvidia-device-plugin-daemonset-b6l7g" [8b67eb48-9663-4ec3-80d1-e64a4bf563b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 21:54:38.049955  487076 system_pods.go:61] "registry-6b586f9694-2tqh6" [6564a666-6603-4044-a2e5-b9e4e0700c5f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 21:54:38.049975  487076 system_pods.go:61] "registry-creds-764b6fb674-c2f45" [5300554b-ec19-4eb4-b416-d72d05fb4df5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 21:54:38.049989  487076 system_pods.go:61] "registry-proxy-wx6pv" [99f13eb6-27b7-4b76-9ed8-62ee24257d3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 21:54:38.050014  487076 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gsfdg" [38c78fd4-7ab1-447c-9e61-598336101feb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:54:38.050031  487076 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n8gmp" [0d98783a-2704-44c2-b6ed-9381e131cc3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:54:38.050043  487076 system_pods.go:61] "storage-provisioner" [8c0989d1-35b5-4024-89f3-6df94b9f2d77] Running
	I1027 21:54:38.050062  487076 system_pods.go:74] duration metric: took 7.75011ms to wait for pod list to return data ...
	I1027 21:54:38.050176  487076 default_sa.go:34] waiting for default service account to be created ...
	I1027 21:54:38.053370  487076 default_sa.go:45] found service account: "default"
	I1027 21:54:38.053393  487076 default_sa.go:55] duration metric: took 3.168409ms for default service account to be created ...
	I1027 21:54:38.053404  487076 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 21:54:38.057671  487076 system_pods.go:86] 20 kube-system pods found
	I1027 21:54:38.057699  487076 system_pods.go:89] "amd-gpu-device-plugin-txrzm" [24503293-388b-4873-bc11-107a24f28f57] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 21:54:38.057706  487076 system_pods.go:89] "coredns-66bc5c9577-8pt79" [87832036-6af9-4dc9-9b16-1bcf3671b894] Running
	I1027 21:54:38.057716  487076 system_pods.go:89] "csi-hostpath-attacher-0" [ea66be78-f7b8-4684-b477-b41500f5e426] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 21:54:38.057727  487076 system_pods.go:89] "csi-hostpath-resizer-0" [42938fd1-8761-4f67-874e-41d6224778a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 21:54:38.057737  487076 system_pods.go:89] "csi-hostpathplugin-p5sgs" [ab3b75d3-2e4b-408e-9216-3d162a34c2d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 21:54:38.057746  487076 system_pods.go:89] "etcd-addons-681393" [4a99290c-ab6d-40bc-a014-b5e2a655d0ff] Running
	I1027 21:54:38.057752  487076 system_pods.go:89] "kindnet-5g7gz" [a82f4737-bdb6-4fc8-803d-afa31237a5a0] Running
	I1027 21:54:38.057760  487076 system_pods.go:89] "kube-apiserver-addons-681393" [013f5a64-e0b0-4aaa-bb65-8f9230b5b663] Running
	I1027 21:54:38.057765  487076 system_pods.go:89] "kube-controller-manager-addons-681393" [3b41da40-aeb6-4896-bc6d-59c3b1d565c4] Running
	I1027 21:54:38.057776  487076 system_pods.go:89] "kube-ingress-dns-minikube" [b88574d6-394b-4266-a1a1-191b7686c64e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 21:54:38.057781  487076 system_pods.go:89] "kube-proxy-9nhv5" [dbc6ef4d-5de8-4e7f-a6ee-e79d3c8afe68] Running
	I1027 21:54:38.057790  487076 system_pods.go:89] "kube-scheduler-addons-681393" [5f8387c3-53fc-4f5a-88c8-ee8f38995cf5] Running
	I1027 21:54:38.057797  487076 system_pods.go:89] "metrics-server-85b7d694d7-nkkls" [1c66ed47-adbe-4977-9533-1e61982c1a89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 21:54:38.057807  487076 system_pods.go:89] "nvidia-device-plugin-daemonset-b6l7g" [8b67eb48-9663-4ec3-80d1-e64a4bf563b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 21:54:38.057822  487076 system_pods.go:89] "registry-6b586f9694-2tqh6" [6564a666-6603-4044-a2e5-b9e4e0700c5f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 21:54:38.057831  487076 system_pods.go:89] "registry-creds-764b6fb674-c2f45" [5300554b-ec19-4eb4-b416-d72d05fb4df5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 21:54:38.057841  487076 system_pods.go:89] "registry-proxy-wx6pv" [99f13eb6-27b7-4b76-9ed8-62ee24257d3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 21:54:38.057850  487076 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gsfdg" [38c78fd4-7ab1-447c-9e61-598336101feb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:54:38.057862  487076 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n8gmp" [0d98783a-2704-44c2-b6ed-9381e131cc3d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 21:54:38.057867  487076 system_pods.go:89] "storage-provisioner" [8c0989d1-35b5-4024-89f3-6df94b9f2d77] Running
	I1027 21:54:38.057879  487076 system_pods.go:126] duration metric: took 4.46821ms to wait for k8s-apps to be running ...
	I1027 21:54:38.057890  487076 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 21:54:38.057956  487076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 21:54:38.075036  487076 system_svc.go:56] duration metric: took 17.132507ms WaitForService to wait for kubelet
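The kubelet check above is just systemd's exit status: `systemctl is-active --quiet` exits 0 when the unit is active and non-zero otherwise. A minimal stand-alone sketch of the same probe (illustrative only — not minikube's actual ssh_runner path, which issues this command over SSH inside the node):

	// kubelet_check.go - illustrative sketch of the probe logged above:
	// `systemctl is-active --quiet kubelet` reports liveness via exit code only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses all output; the exit code carries the result.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}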
	I1027 21:54:38.075075  487076 kubeadm.go:587] duration metric: took 12.075889968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 21:54:38.075102  487076 node_conditions.go:102] verifying NodePressure condition ...
	I1027 21:54:38.078220  487076 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 21:54:38.078257  487076 node_conditions.go:123] node cpu capacity is 8
	I1027 21:54:38.078275  487076 node_conditions.go:105] duration metric: took 3.167192ms to run NodePressure ...
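The two capacity figures logged here (304681132Ki ephemeral storage, 8 CPUs) come straight off the Node object's status. A reduced client-go sketch that prints the same numbers — the kubeconfig path is the one appearing in this log, but the client plumbing and everything else are assumptions, not minikube's node_conditions.go:

	// node_capacity.go - assumed sketch, not minikube's code: list nodes and
	// print the ephemeral-storage and cpu capacity seen in the log above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: ephemeral storage=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
		}
	}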
	I1027 21:54:38.078291  487076 start.go:242] waiting for startup goroutines ...
	I1027 21:54:38.146246  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:38.380506  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:38.548682  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:38.548859  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:38.592828  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:38.880391  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:39.049055  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:39.049185  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:39.150140  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:39.380852  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:39.549018  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:39.549089  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:39.592694  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:39.880081  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:40.048263  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:40.048511  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:40.093423  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:40.381193  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:40.548913  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:40.549039  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:40.593974  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:40.880376  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:41.048759  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:41.048974  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:41.150352  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:41.380055  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:41.548050  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:41.548172  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:41.593288  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:41.880737  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:42.049261  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:42.049337  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:42.093479  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:42.286611  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:42.380300  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:42.548717  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:42.548880  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:42.592714  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:42.880062  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 21:54:43.022283  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:43.022319  487076 retry.go:31] will retry after 2.511992341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
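The failure itself is mechanical: kubectl's client-side validation rejects ig-crd.yaml because the document in it carries no apiVersion or kind, while ig-deployment.yaml keeps applying cleanly — hence the unchanged/configured lines in stdout on every attempt. A rough local reproduction of that check (an assumed sketch using gopkg.in/yaml.v3, not kubectl's actual validator; it only inspects the first YAML document in the file):

	// manifest_check.go - assumed illustration of the validation failing above:
	// every object fed to `kubectl apply` must set apiVersion and kind.
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		var obj struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal(data, &obj); err != nil {
			panic(err)
		}
		if obj.APIVersion == "" || obj.Kind == "" {
			// matches the "[apiVersion not set, kind not set]" error in this log
			fmt.Println("manifest is missing apiVersion and/or kind")
		}
	}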
	I1027 21:54:43.048388  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:43.048554  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:43.093885  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:43.380058  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:43.549840  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:43.550095  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:43.650747  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:43.880703  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:44.049756  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:44.049808  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:44.092793  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:44.380213  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:44.548266  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:44.548273  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:44.592850  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:44.879559  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:45.048770  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:45.048864  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:45.092681  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:45.379876  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:45.535094  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:45.549390  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:45.549520  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:45.593452  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:45.880707  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:46.052082  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:46.052777  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:46.094654  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:46.380595  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 21:54:46.436466  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:46.436504  487076 retry.go:31] will retry after 5.042254322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
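Note the shape of the retry delays across these attempts: ~2.5s, then ~5.0s here, and further down ~8.6s, ~19.7s, and ~27.2s — a roughly doubling backoff with jitter. A self-contained sketch of that pattern (illustrative only; retry.go's exact parameters are not visible in this log):

	// backoff_sketch.go - illustrative: print a doubling backoff with random
	// jitter, the shape the retry.go delays in this log suggest.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func main() {
		base := 2 * time.Second
		for attempt := 0; attempt < 5; attempt++ {
			d := base << attempt                          // 2s, 4s, 8s, 16s, 32s
			d += time.Duration(rand.Int63n(int64(d) / 2)) // up to +50% jitter
			fmt.Printf("attempt %d: will retry after %v\n", attempt+1, d)
		}
	}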
	I1027 21:54:46.549503  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:46.549590  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:46.593854  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:46.880328  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:47.074301  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:47.074384  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:47.093349  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:47.380702  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:47.549307  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:47.549565  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:47.593040  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:47.881074  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:48.048787  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:48.048811  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:48.093285  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:48.381213  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:48.549034  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:48.549138  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:48.593553  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:48.880068  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:49.048804  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:49.049100  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:49.094056  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:49.380969  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:49.549132  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:49.549281  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:49.593027  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:49.880696  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:50.050077  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:50.050186  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:50.093833  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:50.380047  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:50.549937  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:50.550119  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:50.651577  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:50.880065  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:51.049590  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:51.049793  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:51.093433  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:51.379870  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:51.478988  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:54:51.548966  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:51.549032  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:51.593242  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:51.880325  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:52.049563  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:52.049650  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 21:54:52.058878  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:52.058916  487076 retry.go:31] will retry after 8.574760051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:54:52.093439  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:52.379498  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:52.548666  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:52.548779  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:52.594163  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:52.880159  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:53.048268  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:53.048332  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:53.093496  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:53.379961  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:53.549067  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:53.549293  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:53.592874  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:53.880772  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:54.049068  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:54.049189  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:54.093117  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:54.381017  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:54.549786  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:54.550665  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:54.593286  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:54.879969  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:55.048795  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:55.048939  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:55.092265  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:55.380597  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:55.548847  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:55.548999  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:55.592560  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:55.879367  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:56.048571  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:56.048643  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:56.093021  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:56.380090  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:56.548204  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:56.548249  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:56.592892  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:56.879663  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:57.049061  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:57.049274  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:57.093814  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:57.379987  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:57.549339  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:57.549408  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:57.593877  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:57.880333  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:58.049554  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:58.049649  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:58.150519  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:58.380690  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:58.549106  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:58.549182  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:58.592792  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:58.880000  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:59.048400  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:59.048899  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:59.093797  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:59.380813  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:54:59.548906  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:54:59.549108  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:54:59.592921  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:54:59.880646  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:00.049180  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:00.049238  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:00.150362  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:00.380727  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:00.548697  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:00.548808  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:00.593564  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:00.634688  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:55:00.886398  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:01.049111  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:01.049516  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:01.093676  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 21:55:01.260311  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:55:01.260351  487076 retry.go:31] will retry after 19.680128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:55:01.380617  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:01.549466  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:01.549493  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:01.593901  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:01.880106  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:02.049621  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:02.050195  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:02.151265  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:02.380056  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:02.548074  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:02.548180  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:02.592730  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:02.879662  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:03.048752  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:03.048814  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:03.092583  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:03.379519  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:03.548508  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:03.548573  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:03.593008  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:03.879898  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:04.049697  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:04.049852  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:04.094425  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:04.379549  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:04.548879  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:04.549368  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:04.593275  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:04.881367  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:05.048383  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:05.048433  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:05.093534  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:05.379848  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:05.549016  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:05.549035  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:05.593645  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:05.879792  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:06.049716  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:06.049755  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:06.150342  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:06.380397  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:06.548935  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:06.549125  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:06.592727  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:06.880609  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:07.049526  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:07.049596  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:07.150647  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:07.380027  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:07.548221  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:07.548236  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:07.592849  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:07.879781  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:08.049271  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:08.049295  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:08.093329  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:08.380804  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:08.549197  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:08.549285  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:08.593193  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:08.880471  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:09.049200  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:09.049259  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:09.093332  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:09.379725  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:09.549455  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:09.549547  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:09.593517  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:09.880289  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:10.048394  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:10.048642  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:10.093659  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:10.380099  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:10.549078  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:10.549217  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:10.593177  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:10.881773  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:11.049114  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:11.049370  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:11.094126  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:11.380768  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:11.549246  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:11.549314  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:11.593386  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:11.879938  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:12.050153  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:12.050211  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:12.093651  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:12.380091  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:12.548182  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:12.548293  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:12.593276  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:12.880527  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:13.048896  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 21:55:13.049077  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:13.093275  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:13.381037  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:13.549880  487076 kapi.go:107] duration metric: took 45.504869179s to wait for kubernetes.io/minikube-addons=registry ...
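kapi.go:96 is a plain poll: list the pods matching an addon's label selector about twice a second and re-check until the pod leaves Pending, which the registry selector just did after 45.5s. A reduced client-go sketch of such a loop — the selector and kubeconfig path are taken from the log, while the kube-system namespace and all client plumbing are assumptions:

	// wait_for_pod.go - reduced sketch of the wait loop logged above: poll pods
	// matching a label selector until one reports phase Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "kubernetes.io/minikube-addons=registry"
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 &&
				pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("pod is Running")
				return
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~0.5s between checks
		}
	}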
	I1027 21:55:13.550165  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:13.593038  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:13.880451  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:14.049561  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:14.093757  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:14.381109  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:14.548452  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:14.592963  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:14.880874  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:15.050125  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:15.093467  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:15.379456  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:15.548933  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:15.592917  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:15.880469  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:16.049734  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:16.150399  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:16.380974  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:16.549369  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:16.593684  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:16.882332  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:17.051924  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:17.097792  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:17.382073  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:17.550016  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:17.594163  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:17.880575  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:18.050229  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:18.093701  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:18.379569  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:18.549214  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:18.593888  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:18.880679  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:19.049234  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:19.093887  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:19.380648  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:19.549380  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:19.593868  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:19.880882  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:20.050104  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:20.093253  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:20.380439  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:20.548851  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:20.594190  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:20.880778  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:20.940983  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 21:55:21.048666  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:21.093914  487076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 21:55:21.380825  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 21:55:21.495373  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:55:21.495424  487076 retry.go:31] will retry after 27.181488911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 21:55:21.548498  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:21.593103  487076 kapi.go:107] duration metric: took 54.003494051s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 21:55:21.880328  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:22.048448  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:22.379833  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:22.549093  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:22.880397  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:23.048765  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:23.379828  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:23.549215  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:23.880535  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 21:55:24.049243  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:24.380745  487076 kapi.go:107] duration metric: took 50.004302519s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 21:55:24.381570  487076 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-681393 cluster.
	I1027 21:55:24.382744  487076 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 21:55:24.383980  487076 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
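	[editor's note] For reference, opting a single pod out of the credential mount described above means labeling it with the gcp-auth-skip-secret key. A hedged sketch follows; the pod name, image, and label value are illustrative assumptions, since the addon message only specifies the key:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds               # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"   # the key is what the webhook checks; the value is assumed
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]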
	I1027 21:55:24.550099  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:25.049386  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:25.549513  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:26.129220  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:26.548959  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:27.049120  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:27.548575  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:28.049084  487076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 21:55:28.549922  487076 kapi.go:107] duration metric: took 1m0.504939819s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 21:55:48.678804  487076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1027 21:55:49.239863  487076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 21:55:49.240006  487076 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
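	[editor's note] The repeated ig-crd.yaml failure above is kubectl's schema validation rejecting a manifest document that omits its type header; every document in an applied file must declare both apiVersion and kind, exactly the two fields the error reports missing. The --validate=false flag the error suggests would only mask the broken file, not fix it. A hedged sketch of a minimal, valid CRD document (the resource name, group, and schema below are illustrative assumptions, not the shipped file's contents):

	apiVersion: apiextensions.k8s.io/v1   # header the validator requires on every document
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io      # hypothetical name for illustration
	spec:
	  group: gadget.example.io
	  scope: Namespaced
	  names:
	    plural: traces
	    singular: trace
	    kind: Trace
	  versions:
	  - name: v1alpha1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object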
	I1027 21:55:49.241280  487076 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, storage-provisioner, ingress-dns, nvidia-device-plugin, registry-creds, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 21:55:49.242187  487076 addons.go:514] duration metric: took 1m23.242980308s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin storage-provisioner ingress-dns nvidia-device-plugin registry-creds yakd storage-provisioner-rancher metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 21:55:49.242249  487076 start.go:247] waiting for cluster config update ...
	I1027 21:55:49.242276  487076 start.go:256] writing updated cluster config ...
	I1027 21:55:49.242629  487076 ssh_runner.go:195] Run: rm -f paused
	I1027 21:55:49.246829  487076 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 21:55:49.250827  487076 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8pt79" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.255806  487076 pod_ready.go:94] pod "coredns-66bc5c9577-8pt79" is "Ready"
	I1027 21:55:49.255831  487076 pod_ready.go:86] duration metric: took 4.974565ms for pod "coredns-66bc5c9577-8pt79" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.258000  487076 pod_ready.go:83] waiting for pod "etcd-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.261876  487076 pod_ready.go:94] pod "etcd-addons-681393" is "Ready"
	I1027 21:55:49.261899  487076 pod_ready.go:86] duration metric: took 3.87761ms for pod "etcd-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.263770  487076 pod_ready.go:83] waiting for pod "kube-apiserver-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.267327  487076 pod_ready.go:94] pod "kube-apiserver-addons-681393" is "Ready"
	I1027 21:55:49.267348  487076 pod_ready.go:86] duration metric: took 3.55949ms for pod "kube-apiserver-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.269155  487076 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.650974  487076 pod_ready.go:94] pod "kube-controller-manager-addons-681393" is "Ready"
	I1027 21:55:49.651006  487076 pod_ready.go:86] duration metric: took 381.83076ms for pod "kube-controller-manager-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:49.851729  487076 pod_ready.go:83] waiting for pod "kube-proxy-9nhv5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:50.251484  487076 pod_ready.go:94] pod "kube-proxy-9nhv5" is "Ready"
	I1027 21:55:50.251517  487076 pod_ready.go:86] duration metric: took 399.75771ms for pod "kube-proxy-9nhv5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:50.451444  487076 pod_ready.go:83] waiting for pod "kube-scheduler-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:50.850837  487076 pod_ready.go:94] pod "kube-scheduler-addons-681393" is "Ready"
	I1027 21:55:50.850914  487076 pod_ready.go:86] duration metric: took 399.399412ms for pod "kube-scheduler-addons-681393" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 21:55:50.850963  487076 pod_ready.go:40] duration metric: took 1.604073115s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 21:55:50.898674  487076 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 21:55:50.901032  487076 out.go:179] * Done! kubectl is now configured to use "addons-681393" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 21:55:27 addons-681393 crio[775]: time="2025-10-27T21:55:27.39878938Z" level=info msg="Starting container: 2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a" id=37490885-f660-4a50-8926-8632dd657e9a name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 21:55:27 addons-681393 crio[775]: time="2025-10-27T21:55:27.402175968Z" level=info msg="Started container" PID=6403 containerID=2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a description=kube-system/csi-hostpathplugin-p5sgs/csi-snapshotter id=37490885-f660-4a50-8926-8632dd657e9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a430058a2576b0fdba0de85c5c49507de5d60b4000d8b2c29c3bf49818de76d
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.746617991Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4e777dd1-d0f1-474d-ba14-93df68d4294b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.746725042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.752928799Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8b8f58837bf171324d5dfe590d5434dd7ce417c7c8f6d7e863b50b766e0b3271 UID:d42e239b-3156-4365-aa06-9d3e832e54db NetNS:/var/run/netns/6c5993a2-c132-4411-8477-92e8dcc52a50 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000518f38}] Aliases:map[]}"
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.752983234Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.763474769Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8b8f58837bf171324d5dfe590d5434dd7ce417c7c8f6d7e863b50b766e0b3271 UID:d42e239b-3156-4365-aa06-9d3e832e54db NetNS:/var/run/netns/6c5993a2-c132-4411-8477-92e8dcc52a50 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000518f38}] Aliases:map[]}"
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.76360574Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.764614857Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.765880387Z" level=info msg="Ran pod sandbox 8b8f58837bf171324d5dfe590d5434dd7ce417c7c8f6d7e863b50b766e0b3271 with infra container: default/busybox/POD" id=4e777dd1-d0f1-474d-ba14-93df68d4294b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.767377707Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f0f66be-a35c-421c-bd7c-f5841143ae40 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.767518812Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0f0f66be-a35c-421c-bd7c-f5841143ae40 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.767569577Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0f0f66be-a35c-421c-bd7c-f5841143ae40 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.768283932Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f6c2c5f9-ac69-43ad-a97f-04c9bc6562db name=/runtime.v1.ImageService/PullImage
	Oct 27 21:55:51 addons-681393 crio[775]: time="2025-10-27T21:55:51.770197755Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.710502858Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f6c2c5f9-ac69-43ad-a97f-04c9bc6562db name=/runtime.v1.ImageService/PullImage
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.711130145Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e3ecb873-7e9d-4bd8-bcce-a49c1d0d6cde name=/runtime.v1.ImageService/ImageStatus
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.712579425Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=35cda130-db60-414a-a884-82a3676ed363 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.715502724Z" level=info msg="Creating container: default/busybox/busybox" id=28f78347-1ec7-4385-a568-8c0c43b73142 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.71563895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.72253358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.723107732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.751794739Z" level=info msg="Created container 2717d9026043f9949c33d5e6e970fb0e771a48258e08c64bca89c89a933e1c64: default/busybox/busybox" id=28f78347-1ec7-4385-a568-8c0c43b73142 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.75239734Z" level=info msg="Starting container: 2717d9026043f9949c33d5e6e970fb0e771a48258e08c64bca89c89a933e1c64" id=c4d2bffa-8f8b-485f-96e3-be7ded84cf25 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 21:55:54 addons-681393 crio[775]: time="2025-10-27T21:55:54.754066116Z" level=info msg="Started container" PID=6678 containerID=2717d9026043f9949c33d5e6e970fb0e771a48258e08c64bca89c89a933e1c64 description=default/busybox/busybox id=c4d2bffa-8f8b-485f-96e3-be7ded84cf25 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b8f58837bf171324d5dfe590d5434dd7ce417c7c8f6d7e863b50b766e0b3271
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	2717d9026043f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   8b8f58837bf17       busybox                                     default
	2010575178c32       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          34 seconds ago       Running             csi-snapshotter                          0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	59650918c62fb       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          35 seconds ago       Running             csi-provisioner                          0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	85ee742586776       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            37 seconds ago       Running             liveness-probe                           0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	f24d2cb4a2b58       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           37 seconds ago       Running             hostpath                                 0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	182c62dbb6d73       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 38 seconds ago       Running             gcp-auth                                 0                   fb5c9c6913299       gcp-auth-78565c9fb4-mqt6k                   gcp-auth
	3de48bac49627       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             41 seconds ago       Running             controller                               0                   3337cb68fea08       ingress-nginx-controller-675c5ddd98-glp28   ingress-nginx
	6467e0e7a8c5b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                45 seconds ago       Running             node-driver-registrar                    0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	ff45bb62e13ce       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            46 seconds ago       Running             gadget                                   0                   e0da334faea8e       gadget-g4nwh                                gadget
	f5f70b0c5ec76       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              50 seconds ago       Running             registry-proxy                           0                   50bf471b22bee       registry-proxy-wx6pv                        kube-system
	9f32528dcb836       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     53 seconds ago       Running             amd-gpu-device-plugin                    0                   a6ada1620976a       amd-gpu-device-plugin-txrzm                 kube-system
	45df9e906c25d       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             54 seconds ago       Exited              patch                                    1                   f89738a564e3a       gcp-auth-certs-patch-72cvl                  gcp-auth
	3e5650d8abeb4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   55 seconds ago       Exited              create                                   0                   6789fa7833733       gcp-auth-certs-create-djcmb                 gcp-auth
	153647beb1594       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   55 seconds ago       Running             csi-external-health-monitor-controller   0                   8a430058a2576       csi-hostpathplugin-p5sgs                    kube-system
	5ddf0325ff467       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     56 seconds ago       Running             nvidia-device-plugin-ctr                 0                   ad5e56c614024       nvidia-device-plugin-daemonset-b6l7g        kube-system
	b847234d4f511       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   2bc2cd958bad3       snapshot-controller-7d9fbc56b8-n8gmp        kube-system
	4b58171ccaea0       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   6b268f837dfbc       csi-hostpath-attacher-0                     kube-system
	f55e91ef28796       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   a7bdc864e3136       csi-hostpath-resizer-0                      kube-system
	0a08d08180b3c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   741db9095fe2e       snapshot-controller-7d9fbc56b8-gsfdg        kube-system
	1b44a338b5f1a       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   dcaaed5f1068c       yakd-dashboard-5ff678cb9-2qn6r              yakd-dashboard
	aa4d992979360       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             About a minute ago   Exited              patch                                    2                   51c82d56a6523       ingress-nginx-admission-patch-tglxq         ingress-nginx
	28e0d7defa53b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   cca1620056ab2       ingress-nginx-admission-create-crz97        ingress-nginx
	fb54ab1a61dad       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   c8bcf6008e0df       registry-6b586f9694-2tqh6                   kube-system
	7d170ca1d55a9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   7887c01d60310       local-path-provisioner-648f6765c9-nxsbb     local-path-storage
	00cc26010baa4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   d76a1ca895668       kube-ingress-dns-minikube                   kube-system
	1f9c8cd6b818b       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   6cf54e016b481       cloud-spanner-emulator-86bd5cbb97-mjqsc     default
	37c2044b18ebd       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   cb12f969b1cd9       metrics-server-85b7d694d7-nkkls             kube-system
	49d0fe83e58c6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   996ea730acecc       coredns-66bc5c9577-8pt79                    kube-system
	bd12cfcd64231       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   701d6c4bf6180       storage-provisioner                         kube-system
	27e7e39745889       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   1a0cafbdf33cb       kube-proxy-9nhv5                            kube-system
	65ad03529a586       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   875fe3a06b628       kindnet-5g7gz                               kube-system
	768d42a191bfa       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   cf1a0de9d9891       kube-controller-manager-addons-681393       kube-system
	9ca7e0d969e10       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   876ee064ef2dc       kube-apiserver-addons-681393                kube-system
	c7060ff537769       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   999d55a6c9def       etcd-addons-681393                          kube-system
	6924a158f2354       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   2efd01e5d650f       kube-scheduler-addons-681393                kube-system
	
	
	==> coredns [49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf] <==
	[INFO] 10.244.0.19:59308 - 40604 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.002565577s
	[INFO] 10.244.0.19:33424 - 48112 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000140762s
	[INFO] 10.244.0.19:33424 - 47751 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.00014364s
	[INFO] 10.244.0.19:48064 - 43251 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000067926s
	[INFO] 10.244.0.19:48064 - 42774 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000049086s
	[INFO] 10.244.0.19:35153 - 32768 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000046456s
	[INFO] 10.244.0.19:35153 - 33045 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.0000904s
	[INFO] 10.244.0.19:33891 - 13069 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133663s
	[INFO] 10.244.0.19:33891 - 13312 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000162682s
	[INFO] 10.244.0.22:47399 - 5755 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000247859s
	[INFO] 10.244.0.22:48203 - 39070 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000322231s
	[INFO] 10.244.0.22:60760 - 10449 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000148556s
	[INFO] 10.244.0.22:58892 - 30718 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000214277s
	[INFO] 10.244.0.22:52232 - 18667 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130793s
	[INFO] 10.244.0.22:51914 - 58131 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164547s
	[INFO] 10.244.0.22:44357 - 42706 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004508154s
	[INFO] 10.244.0.22:60382 - 48623 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004657348s
	[INFO] 10.244.0.22:38702 - 61038 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006389863s
	[INFO] 10.244.0.22:39975 - 46262 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.007410627s
	[INFO] 10.244.0.22:48425 - 2403 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003616018s
	[INFO] 10.244.0.22:46445 - 27836 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005057748s
	[INFO] 10.244.0.22:49343 - 25957 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004746444s
	[INFO] 10.244.0.22:36556 - 41327 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005947448s
	[INFO] 10.244.0.22:48922 - 60592 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001206754s
	[INFO] 10.244.0.22:46013 - 35587 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001380023s
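	[editor's note] The NXDOMAIN fan-out above is normal resolver behavior, not a fault: with the default ndots:5, a short name such as storage.googleapis.com is first tried against every suffix in the pod's search list (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE host suffixes) before the bare name finally resolves with NOERROR. A pod that makes many external lookups can shorten that walk via dnsConfig; a hedged sketch with hypothetical names:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: dns-tuned                 # hypothetical name
	spec:
	  dnsConfig:
	    options:
	    - name: ndots
	      value: "1"                  # dotted names go straight to the upstream resolver
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]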
	
	
	==> describe nodes <==
	Name:               addons-681393
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-681393
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=addons-681393
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T21_54_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-681393
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-681393"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 21:54:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-681393
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 21:56:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 21:55:52 +0000   Mon, 27 Oct 2025 21:54:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 21:55:52 +0000   Mon, 27 Oct 2025 21:54:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 21:55:52 +0000   Mon, 27 Oct 2025 21:54:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 21:55:52 +0000   Mon, 27 Oct 2025 21:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-681393
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2756eef5-641f-4a79-a5ec-5fcab8f11b6e
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-mjqsc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gadget                      gadget-g4nwh                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  gcp-auth                    gcp-auth-78565c9fb4-mqt6k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-glp28    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         95s
	  kube-system                 amd-gpu-device-plugin-txrzm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-66bc5c9577-8pt79                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     97s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpathplugin-p5sgs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 etcd-addons-681393                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         103s
	  kube-system                 kindnet-5g7gz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      97s
	  kube-system                 kube-apiserver-addons-681393                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-addons-681393        200m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-9nhv5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-addons-681393                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 metrics-server-85b7d694d7-nkkls              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         95s
	  kube-system                 nvidia-device-plugin-daemonset-b6l7g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 registry-6b586f9694-2tqh6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 registry-creds-764b6fb674-c2f45              0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 registry-proxy-wx6pv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 snapshot-controller-7d9fbc56b8-gsfdg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 snapshot-controller-7d9fbc56b8-n8gmp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  local-path-storage          local-path-provisioner-648f6765c9-nxsbb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2qn6r               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 95s   kube-proxy       
	  Normal  Starting                 103s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s  kubelet          Node addons-681393 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s  kubelet          Node addons-681393 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s  kubelet          Node addons-681393 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           98s   node-controller  Node addons-681393 event: Registered Node addons-681393 in Controller
	  Normal  NodeReady                85s   kubelet          Node addons-681393 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 ee f2 22 5d 20 08 06
	[Oct27 21:39] IPv4: martian source 10.244.0.1 from 10.244.0.190, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 44 da 06 a2 63 08 06
	[ +38.536320] IPv4: martian source 10.244.0.1 from 10.244.0.191, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 92 84 da d1 2d 5f 08 06
	[Oct27 21:40] IPv4: martian source 10.244.0.1 from 10.244.0.193, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 21 1d 5e 78 db 08 06
	[Oct27 21:42] IPv4: martian source 10.244.0.1 from 10.244.0.200, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 ce 46 62 f7 87 08 06
	[Oct27 21:43] IPv4: martian source 10.244.0.1 from 10.244.0.202, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 28 36 47 80 0d 08 06
	[  +0.003585] IPv4: martian source 10.244.0.1 from 10.244.0.201, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 9c 16 16 08 ad 08 06
	[Oct27 21:44] IPv4: martian source 10.244.0.1 from 10.244.0.203, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a dd b5 d9 f9 72 08 06
	[Oct27 21:45] IPv4: martian source 10.244.0.1 from 10.244.0.204, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 8e f4 6b 51 46 08 06
	[ +17.246524] IPv4: martian source 10.244.0.1 from 10.244.0.205, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 37 b4 80 26 f0 08 06
	[ +15.137114] IPv4: martian source 10.244.0.1 from 10.244.0.206, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa a4 bb 24 4f d3 08 06
	[Oct27 21:46] IPv4: martian source 10.244.0.1 from 10.244.0.207, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	
	
	==> etcd [c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267] <==
	{"level":"warn","ts":"2025-10-27T21:54:17.083134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.090568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.103095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.109326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.115731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.122010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.128996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.135309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.141832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.147756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.154074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.161103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.167464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.174447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.190674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.197216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.203511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:17.259792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:28.549366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:28.556123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35778","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T21:54:47.071185Z","caller":"traceutil/trace.go:172","msg":"trace[356343482] transaction","detail":"{read_only:false; response_revision:969; number_of_response:1; }","duration":"104.174849ms","start":"2025-10-27T21:54:46.966981Z","end":"2025-10-27T21:54:47.071156Z","steps":["trace[356343482] 'process raft request'  (duration: 71.691714ms)","trace[356343482] 'compare'  (duration: 32.352529ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T21:54:54.808923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:54.816358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:54.828008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T21:54:54.834303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37762","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [182c62dbb6d731bd7488f43383d3ab33b72c3391b631c2c3140cc458b433c19e] <==
	2025/10/27 21:55:24 GCP Auth Webhook started!
	2025/10/27 21:55:51 Ready to marshal response ...
	2025/10/27 21:55:51 Ready to write response ...
	2025/10/27 21:55:51 Ready to marshal response ...
	2025/10/27 21:55:51 Ready to write response ...
	2025/10/27 21:55:51 Ready to marshal response ...
	2025/10/27 21:55:51 Ready to write response ...
	
	
	==> kernel <==
	 21:56:02 up  1:38,  0 user,  load average: 0.60, 0.95, 18.19
	Linux addons-681393 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88] <==
	I1027 21:54:26.950776       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 21:54:26.951238       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 21:54:26.951320       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 21:54:26.953139       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 21:54:28.551822       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 21:54:28.551860       1 metrics.go:72] Registering metrics
	I1027 21:54:28.551937       1 controller.go:711] "Syncing nftables rules"
	I1027 21:54:36.918570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:54:36.918647       1 main.go:301] handling current node
	I1027 21:54:46.917059       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:54:46.917110       1 main.go:301] handling current node
	I1027 21:54:56.916982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:54:56.917051       1 main.go:301] handling current node
	I1027 21:55:06.917677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:55:06.917717       1 main.go:301] handling current node
	I1027 21:55:16.918821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:55:16.918879       1 main.go:301] handling current node
	I1027 21:55:26.917053       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:55:26.917099       1 main.go:301] handling current node
	I1027 21:55:36.918317       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:55:36.918367       1 main.go:301] handling current node
	I1027 21:55:46.921860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:55:46.921894       1 main.go:301] handling current node
	I1027 21:55:56.917804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 21:55:56.917842       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 21:54:40.957268       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.192.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.192.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.192.22:443: connect: connection refused" logger="UnhandledError"
	W1027 21:54:41.958848       1 handler_proxy.go:99] no RequestInfo found in the context
	W1027 21:54:41.958848       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 21:54:41.958928       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1027 21:54:41.958968       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1027 21:54:41.958977       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1027 21:54:41.960142       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1027 21:54:45.970362       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.192.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.192.22:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1027 21:54:45.973597       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 21:54:45.973655       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1027 21:54:45.996644       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1027 21:54:54.808874       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 21:54:54.816301       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 21:54:54.828015       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 21:54:54.834265       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1027 21:56:00.591419       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40994: use of closed network connection
	E1027 21:56:00.749642       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41028: use of closed network connection
	
	
	==> kube-controller-manager [768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74] <==
	I1027 21:54:24.791805       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 21:54:24.792018       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 21:54:24.792147       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 21:54:24.792336       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 21:54:24.792352       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 21:54:24.792490       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-681393"
	I1027 21:54:24.792554       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 21:54:24.792577       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 21:54:24.792593       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 21:54:24.792668       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 21:54:24.793363       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 21:54:24.793378       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 21:54:24.793435       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 21:54:24.793500       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 21:54:24.795730       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 21:54:24.795735       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 21:54:24.802421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 21:54:24.814247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 21:54:27.549278       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1027 21:54:39.793794       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1027 21:54:54.801664       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 21:54:54.801743       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 21:54:54.821967       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 21:54:54.902676       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 21:54:54.922908       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
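
The one error in this block (serviceaccount "metrics-server" not found) is an ordering race while the addon manifests were being applied: the ReplicaSet synced before its ServiceAccount existed, and it recovers on the next sync. If it persisted, the missing object could be checked directly (sketch, same assumed context):

	kubectl --context addons-681393 -n kube-system get serviceaccount metrics-server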
	
	
	==> kube-proxy [27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12] <==
	I1027 21:54:26.369218       1 server_linux.go:53] "Using iptables proxy"
	I1027 21:54:26.641245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 21:54:26.748247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 21:54:26.748315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 21:54:26.752694       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 21:54:27.005603       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 21:54:27.005791       1 server_linux.go:132] "Using iptables Proxier"
	I1027 21:54:27.013349       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 21:54:27.018523       1 server.go:527] "Version info" version="v1.34.1"
	I1027 21:54:27.020173       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 21:54:27.033556       1 config.go:200] "Starting service config controller"
	I1027 21:54:27.033589       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 21:54:27.034031       1 config.go:106] "Starting endpoint slice config controller"
	I1027 21:54:27.034051       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 21:54:27.034098       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 21:54:27.034106       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 21:54:27.034791       1 config.go:309] "Starting node config controller"
	I1027 21:54:27.034813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 21:54:27.034821       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 21:54:27.139832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 21:54:27.142568       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 21:54:27.140582       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
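
The only non-informational line here is the nodePortAddresses warning; kube-proxy itself suggests `--nodeport-addresses primary`. Under kubeadm (which minikube uses) that setting lives in the kube-proxy ConfigMap, so one way to inspect it is (a sketch; the config.conf key is the kubeadm default and is not confirmed by this log):

	kubectl --context addons-681393 -n kube-system get configmap kube-proxy \
	  -o jsonpath='{.data.config\.conf}' | grep -i nodePortAddresses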
	
	
	==> kube-scheduler [6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567] <==
	E1027 21:54:17.672293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 21:54:17.672323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 21:54:17.672392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 21:54:17.672604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 21:54:17.672613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 21:54:17.672027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 21:54:17.673974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 21:54:17.674032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 21:54:17.674059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 21:54:17.673993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 21:54:17.673999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 21:54:17.674096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 21:54:17.674163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 21:54:18.480733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 21:54:18.502973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 21:54:18.625684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 21:54:18.670214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 21:54:18.686424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 21:54:18.771840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 21:54:18.780159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 21:54:18.841443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 21:54:18.864756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 21:54:18.875118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 21:54:18.897274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1027 21:54:21.170138       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
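
The "Failed to watch ... forbidden" burst covers the few seconds before the scheduler's RBAC bindings were visible; the final "Caches are synced" line shows it cleared. If such errors persisted, the binding could be probed with an impersonated access check (sketch, assuming the standard system:kube-scheduler identity):

	kubectl --context addons-681393 auth can-i list nodes --as=system:kube-scheduler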
	
	
	==> kubelet <==
	Oct 27 21:55:10 addons-681393 kubelet[1317]: I1027 21:55:10.083451    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-txrzm" podStartSLOduration=1.757023957 podStartE2EDuration="33.083427557s" podCreationTimestamp="2025-10-27 21:54:37 +0000 UTC" firstStartedPulling="2025-10-27 21:54:37.770889237 +0000 UTC m=+18.027632353" lastFinishedPulling="2025-10-27 21:55:09.097292835 +0000 UTC m=+49.354035953" observedRunningTime="2025-10-27 21:55:10.082100335 +0000 UTC m=+50.338843484" watchObservedRunningTime="2025-10-27 21:55:10.083427557 +0000 UTC m=+50.340170698"
	Oct 27 21:55:10 addons-681393 kubelet[1317]: I1027 21:55:10.239095    1317 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2nb\" (UniqueName: \"kubernetes.io/projected/be16ac11-813f-4898-a194-4d00da6a9fc6-kube-api-access-qw2nb\") pod \"be16ac11-813f-4898-a194-4d00da6a9fc6\" (UID: \"be16ac11-813f-4898-a194-4d00da6a9fc6\") "
	Oct 27 21:55:10 addons-681393 kubelet[1317]: I1027 21:55:10.241473    1317 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be16ac11-813f-4898-a194-4d00da6a9fc6-kube-api-access-qw2nb" (OuterVolumeSpecName: "kube-api-access-qw2nb") pod "be16ac11-813f-4898-a194-4d00da6a9fc6" (UID: "be16ac11-813f-4898-a194-4d00da6a9fc6"). InnerVolumeSpecName "kube-api-access-qw2nb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 21:55:10 addons-681393 kubelet[1317]: I1027 21:55:10.339643    1317 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qw2nb\" (UniqueName: \"kubernetes.io/projected/be16ac11-813f-4898-a194-4d00da6a9fc6-kube-api-access-qw2nb\") on node \"addons-681393\" DevicePath \"\""
	Oct 27 21:55:11 addons-681393 kubelet[1317]: I1027 21:55:11.080779    1317 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f89738a564e3a17c253e9072c63e7a9b022806b27f61e31157d8cf0a608f934f"
	Oct 27 21:55:11 addons-681393 kubelet[1317]: I1027 21:55:11.081141    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-txrzm" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:55:13 addons-681393 kubelet[1317]: I1027 21:55:13.090327    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wx6pv" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:55:13 addons-681393 kubelet[1317]: I1027 21:55:13.100684    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-wx6pv" podStartSLOduration=1.6677188109999999 podStartE2EDuration="36.10065786s" podCreationTimestamp="2025-10-27 21:54:37 +0000 UTC" firstStartedPulling="2025-10-27 21:54:37.800047383 +0000 UTC m=+18.056790511" lastFinishedPulling="2025-10-27 21:55:12.232986441 +0000 UTC m=+52.489729560" observedRunningTime="2025-10-27 21:55:13.099891354 +0000 UTC m=+53.356634490" watchObservedRunningTime="2025-10-27 21:55:13.10065786 +0000 UTC m=+53.357400994"
	Oct 27 21:55:14 addons-681393 kubelet[1317]: I1027 21:55:14.094701    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wx6pv" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 21:55:17 addons-681393 kubelet[1317]: I1027 21:55:17.130369    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-g4nwh" podStartSLOduration=17.630249307 podStartE2EDuration="50.130343986s" podCreationTimestamp="2025-10-27 21:54:27 +0000 UTC" firstStartedPulling="2025-10-27 21:54:43.559793344 +0000 UTC m=+23.816536459" lastFinishedPulling="2025-10-27 21:55:16.059888021 +0000 UTC m=+56.316631138" observedRunningTime="2025-10-27 21:55:17.12818831 +0000 UTC m=+57.384931469" watchObservedRunningTime="2025-10-27 21:55:17.130343986 +0000 UTC m=+57.387087121"
	Oct 27 21:55:21 addons-681393 kubelet[1317]: I1027 21:55:21.140882    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-glp28" podStartSLOduration=43.013414101 podStartE2EDuration="54.140858865s" podCreationTimestamp="2025-10-27 21:54:27 +0000 UTC" firstStartedPulling="2025-10-27 21:55:09.454138344 +0000 UTC m=+49.710881474" lastFinishedPulling="2025-10-27 21:55:20.581583102 +0000 UTC m=+60.838326238" observedRunningTime="2025-10-27 21:55:21.140051497 +0000 UTC m=+61.396794637" watchObservedRunningTime="2025-10-27 21:55:21.140858865 +0000 UTC m=+61.397602001"
	Oct 27 21:55:24 addons-681393 kubelet[1317]: I1027 21:55:24.154564    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-mqt6k" podStartSLOduration=35.627184499 podStartE2EDuration="50.154535977s" podCreationTimestamp="2025-10-27 21:54:34 +0000 UTC" firstStartedPulling="2025-10-27 21:55:09.458917478 +0000 UTC m=+49.715660593" lastFinishedPulling="2025-10-27 21:55:23.986268953 +0000 UTC m=+64.243012071" observedRunningTime="2025-10-27 21:55:24.152797382 +0000 UTC m=+64.409540518" watchObservedRunningTime="2025-10-27 21:55:24.154535977 +0000 UTC m=+64.411279113"
	Oct 27 21:55:25 addons-681393 kubelet[1317]: I1027 21:55:25.879412    1317 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 27 21:55:25 addons-681393 kubelet[1317]: I1027 21:55:25.879452    1317 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 27 21:55:28 addons-681393 kubelet[1317]: I1027 21:55:28.186507    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-p5sgs" podStartSLOduration=1.592668864 podStartE2EDuration="51.186479885s" podCreationTimestamp="2025-10-27 21:54:37 +0000 UTC" firstStartedPulling="2025-10-27 21:54:37.760067384 +0000 UTC m=+18.016810511" lastFinishedPulling="2025-10-27 21:55:27.353878418 +0000 UTC m=+67.610621532" observedRunningTime="2025-10-27 21:55:28.185631493 +0000 UTC m=+68.442374652" watchObservedRunningTime="2025-10-27 21:55:28.186479885 +0000 UTC m=+68.443223210"
	Oct 27 21:55:41 addons-681393 kubelet[1317]: E1027 21:55:41.192366    1317 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 27 21:55:41 addons-681393 kubelet[1317]: E1027 21:55:41.192551    1317 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5300554b-ec19-4eb4-b416-d72d05fb4df5-gcr-creds podName:5300554b-ec19-4eb4-b416-d72d05fb4df5 nodeName:}" failed. No retries permitted until 2025-10-27 21:56:45.192514489 +0000 UTC m=+145.449257625 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5300554b-ec19-4eb4-b416-d72d05fb4df5-gcr-creds") pod "registry-creds-764b6fb674-c2f45" (UID: "5300554b-ec19-4eb4-b416-d72d05fb4df5") : secret "registry-creds-gcr" not found
	Oct 27 21:55:41 addons-681393 kubelet[1317]: I1027 21:55:41.836605    1317 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3a6e45-53cf-4f90-af48-6942bda4af3a" path="/var/lib/kubelet/pods/9d3a6e45-53cf-4f90-af48-6942bda4af3a/volumes"
	Oct 27 21:55:41 addons-681393 kubelet[1317]: I1027 21:55:41.837235    1317 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be16ac11-813f-4898-a194-4d00da6a9fc6" path="/var/lib/kubelet/pods/be16ac11-813f-4898-a194-4d00da6a9fc6/volumes"
	Oct 27 21:55:51 addons-681393 kubelet[1317]: I1027 21:55:51.569477    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7w8\" (UniqueName: \"kubernetes.io/projected/d42e239b-3156-4365-aa06-9d3e832e54db-kube-api-access-mj7w8\") pod \"busybox\" (UID: \"d42e239b-3156-4365-aa06-9d3e832e54db\") " pod="default/busybox"
	Oct 27 21:55:51 addons-681393 kubelet[1317]: I1027 21:55:51.569626    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d42e239b-3156-4365-aa06-9d3e832e54db-gcp-creds\") pod \"busybox\" (UID: \"d42e239b-3156-4365-aa06-9d3e832e54db\") " pod="default/busybox"
	Oct 27 21:55:55 addons-681393 kubelet[1317]: I1027 21:55:55.291024    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.346928639 podStartE2EDuration="4.290999714s" podCreationTimestamp="2025-10-27 21:55:51 +0000 UTC" firstStartedPulling="2025-10-27 21:55:51.767860598 +0000 UTC m=+92.024603713" lastFinishedPulling="2025-10-27 21:55:54.711931667 +0000 UTC m=+94.968674788" observedRunningTime="2025-10-27 21:55:55.29055701 +0000 UTC m=+95.547300146" watchObservedRunningTime="2025-10-27 21:55:55.290999714 +0000 UTC m=+95.547742850"
	Oct 27 21:56:00 addons-681393 kubelet[1317]: E1027 21:56:00.591303    1317 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50606->127.0.0.1:34281: write tcp 127.0.0.1:50606->127.0.0.1:34281: write: broken pipe
	Oct 27 21:56:00 addons-681393 kubelet[1317]: E1027 21:56:00.749621    1317 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50616->127.0.0.1:34281: write tcp 127.0.0.1:50616->127.0.0.1:34281: write: broken pipe
	Oct 27 21:56:00 addons-681393 kubelet[1317]: I1027 21:56:00.833087    1317 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-2tqh6" secret="" err="secret \"gcp-auth\" not found"
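
The registry-creds-gcr mount failures mean the registry-creds addon was enabled without credentials being configured, so the secret it mounts never existed; that is why registry-creds-764b6fb674-c2f45 shows up as a non-running pod later in this report. The usual remedy is the addon's interactive configure step (a sketch, not part of this test run):

	out/minikube-linux-amd64 -p addons-681393 addons configure registry-creds
	kubectl --context addons-681393 -n kube-system get secret registry-creds-gcr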
	
	
	==> storage-provisioner [bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d] <==
	W1027 21:55:38.105006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:40.108021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:40.112023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:42.115057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:42.118765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:44.122037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:44.128846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:46.131815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:46.137680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:48.141152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:48.144905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:50.148092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:50.153475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:52.156851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:52.160896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:54.164394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:54.170299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:56.173674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:56.177589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:58.181014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:55:58.185168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:56:00.188263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:56:00.192446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:56:02.195689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 21:56:02.199967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
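
These warnings repeat every two seconds because the storage-provisioner's leader election still uses an Endpoints lock; they are noise, not a failure. The lock object can be inspected if needed (sketch; the k8s.io-minikube-hostpath name is an assumption based on minikube's hostpath provisioner and does not appear in this log):

	kubectl --context addons-681393 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml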
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-681393 -n addons-681393
helpers_test.go:269: (dbg) Run:  kubectl --context addons-681393 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq registry-creds-764b6fb674-c2f45
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-681393 describe pod ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq registry-creds-764b6fb674-c2f45
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-681393 describe pod ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq registry-creds-764b6fb674-c2f45: exit status 1 (61.771653ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-crz97" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tglxq" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-c2f45" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-681393 describe pod ingress-nginx-admission-create-crz97 ingress-nginx-admission-patch-tglxq registry-creds-764b6fb674-c2f45: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable headlamp --alsologtostderr -v=1: exit status 11 (257.49935ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 21:56:03.404341  496470 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:03.404456  496470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:03.404461  496470 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:03.404465  496470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:03.404694  496470 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:03.404986  496470 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:03.405368  496470 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:03.405385  496470 addons.go:606] checking whether the cluster is paused
	I1027 21:56:03.405465  496470 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:03.405482  496470 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:03.405862  496470 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:03.425466  496470 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:03.425536  496470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:03.442722  496470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:03.543362  496470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:03.543453  496470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:03.575013  496470 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:03.575036  496470 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:03.575040  496470 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:03.575043  496470 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:03.575045  496470 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:03.575048  496470 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:03.575051  496470 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:03.575053  496470 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:03.575056  496470 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:03.575061  496470 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:03.575063  496470 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:03.575066  496470 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:03.575068  496470 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:03.575070  496470 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:03.575073  496470 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:03.575077  496470 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:03.575079  496470 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:03.575083  496470 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:03.575086  496470 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:03.575088  496470 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:03.575108  496470 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:03.575110  496470 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:03.575113  496470 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:03.575115  496470 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:03.575118  496470 cri.go:89] found id: ""
	I1027 21:56:03.575158  496470 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:03.592964  496470 out.go:203] 
	W1027 21:56:03.594014  496470 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:03.594032  496470 out.go:285] * 
	* 
	W1027 21:56:03.597083  496470 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:03.598153  496470 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.60s)
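
The Headlamp addon itself deployed; what failed is the paused-state check that `addons disable` runs first: minikube shells into the node, lists the kube-system containers with crictl, then runs `sudo runc list -f json`, which exits 1 on this crio node because /run/runc does not exist. Every `addons disable` in this run fails the same way. The failing step can be reproduced by hand (a sketch reusing the exact commands from the log above):

	out/minikube-linux-amd64 -p addons-681393 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p addons-681393 ssh -- sudo runc list -f json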

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-mjqsc" [a8709fde-7272-40af-8b93-65b6b4e235b5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003537509s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (249.859638ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 21:56:21.106619  498390 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:21.106899  498390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:21.106909  498390 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:21.106913  498390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:21.107134  498390 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:21.107380  498390 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:21.107707  498390 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:21.107721  498390 addons.go:606] checking whether the cluster is paused
	I1027 21:56:21.107797  498390 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:21.107809  498390 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:21.108198  498390 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:21.125790  498390 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:21.125859  498390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:21.142856  498390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:21.241924  498390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:21.242068  498390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:21.274044  498390 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:21.274080  498390 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:21.274084  498390 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:21.274087  498390 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:21.274090  498390 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:21.274095  498390 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:21.274099  498390 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:21.274105  498390 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:21.274109  498390 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:21.274130  498390 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:21.274139  498390 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:21.274144  498390 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:21.274151  498390 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:21.274156  498390 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:21.274163  498390 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:21.274170  498390 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:21.274175  498390 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:21.274181  498390 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:21.274183  498390 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:21.274185  498390 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:21.274188  498390 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:21.274190  498390 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:21.274192  498390 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:21.274195  498390 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:21.274198  498390 cri.go:89] found id: ""
	I1027 21:56:21.274250  498390 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:21.289343  498390 out.go:203] 
	W1027 21:56:21.290492  498390 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:21.290520  498390 out.go:285] * 
	* 
	W1027 21:56:21.293664  498390 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:21.294802  498390 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

TestAddons/parallel/LocalPath (15.16s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-681393 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-681393 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-681393 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [63f66c19-80ce-42f9-aa50-ddf1c3ee3962] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [63f66c19-80ce-42f9-aa50-ddf1c3ee3962] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [63f66c19-80ce-42f9-aa50-ddf1c3ee3962] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.003934297s
addons_test.go:967: (dbg) Run:  kubectl --context addons-681393 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 ssh "cat /opt/local-path-provisioner/pvc-0c92e78d-b0c3-4e9d-862a-de825b3f6cd6_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-681393 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-681393 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (269.38514ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 21:56:33.289568  499183 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:33.289872  499183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:33.289883  499183 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:33.289888  499183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:33.290117  499183 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:33.290393  499183 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:33.290739  499183 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:33.290756  499183 addons.go:606] checking whether the cluster is paused
	I1027 21:56:33.290834  499183 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:33.290846  499183 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:33.291236  499183 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:33.308516  499183 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:33.308589  499183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:33.325473  499183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:33.425681  499183 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:33.425766  499183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:33.456030  499183 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:33.456063  499183 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:33.456067  499183 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:33.456071  499183 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:33.456073  499183 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:33.456078  499183 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:33.456080  499183 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:33.456083  499183 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:33.456085  499183 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:33.456095  499183 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:33.456098  499183 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:33.456109  499183 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:33.456113  499183 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:33.456117  499183 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:33.456120  499183 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:33.456146  499183 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:33.456157  499183 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:33.456163  499183 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:33.456166  499183 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:33.456168  499183 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:33.456171  499183 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:33.456174  499183 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:33.456176  499183 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:33.456179  499183 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:33.456181  499183 cri.go:89] found id: ""
	I1027 21:56:33.456253  499183 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:33.476744  499183 out.go:203] 
	W1027 21:56:33.480368  499183 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:33.480397  499183 out.go:285] * 
	* 
	W1027 21:56:33.486257  499183 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:33.488318  499183 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (15.16s)
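Note on the four MK_ADDON_DISABLE_PAUSED failures (LocalPath above; NvidiaDevicePlugin, Yakd, and AmdGpuDevicePlugin below): in each log the crictl listing succeeds, so the kube-system containers exist; what fails is the follow-up paused-state check, sudo runc list -f json, because /run/runc is absent on this crio node. That usually means crio is driving a different OCI runtime (crun keeps its state under /run/crun) or keeping runc state under a non-default root. A minimal diagnostic sketch against the profile named in the logs; the crio config invocation and grep pattern are assumptions, not something this report verifies:

	# Reproduce the failing paused-state check from the log
	minikube -p addons-681393 ssh -- sudo runc list -f json
	# Which runtime state directory actually exists?
	minikube -p addons-681393 ssh -- sudo ls -d /run/runc /run/crun
	# Which low-level runtime is crio configured with? (pipe/grep run on the host)
	minikube -p addons-681393 ssh -- sudo crio config | grep -A3 default_runtime
	# The CRI-level listing that succeeded in the same log, for comparison
	minikube -p addons-681393 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system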

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-b6l7g" [8b67eb48-9663-4ec3-80d1-e64a4bf563b4] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003360427s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (272.86852ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 21:56:06.080904  496601 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:06.081242  496601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:06.081249  496601 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:06.081256  496601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:06.081630  496601 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:06.082052  496601 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:06.082574  496601 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:06.082598  496601 addons.go:606] checking whether the cluster is paused
	I1027 21:56:06.082727  496601 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:06.082741  496601 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:06.083408  496601 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:06.104980  496601 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:06.105050  496601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:06.124390  496601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:06.225720  496601 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:06.225822  496601 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:06.260000  496601 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:06.260016  496601 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:06.260020  496601 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:06.260023  496601 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:06.260026  496601 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:06.260029  496601 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:06.260031  496601 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:06.260034  496601 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:06.260036  496601 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:06.260041  496601 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:06.260045  496601 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:06.260049  496601 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:06.260061  496601 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:06.260065  496601 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:06.260069  496601 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:06.260074  496601 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:06.260079  496601 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:06.260086  496601 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:06.260091  496601 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:06.260095  496601 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:06.260100  496601 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:06.260107  496601 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:06.260113  496601 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:06.260117  496601 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:06.260121  496601 cri.go:89] found id: ""
	I1027 21:56:06.260168  496601 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:06.275053  496601 out.go:203] 
	W1027 21:56:06.276079  496601 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:06.276099  496601 out.go:285] * 
	W1027 21:56:06.279591  496601 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:06.280338  496601 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.28s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2qn6r" [6306d85b-dfcb-47ca-8e65-470aa586d75a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00322789s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable yakd --alsologtostderr -v=1: exit status 11 (253.602184ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 21:56:18.137693  498165 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:18.137952  498165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:18.137963  498165 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:18.137969  498165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:18.138176  498165 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:18.138472  498165 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:18.138833  498165 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:18.138855  498165 addons.go:606] checking whether the cluster is paused
	I1027 21:56:18.138971  498165 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:18.138990  498165 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:18.139400  498165 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:18.157876  498165 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:18.157966  498165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:18.176761  498165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:18.277293  498165 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:18.277372  498165 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:18.308505  498165 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:18.308525  498165 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:18.308529  498165 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:18.308532  498165 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:18.308534  498165 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:18.308537  498165 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:18.308540  498165 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:18.308542  498165 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:18.308544  498165 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:18.308549  498165 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:18.308551  498165 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:18.308554  498165 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:18.308556  498165 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:18.308565  498165 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:18.308568  498165 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:18.308572  498165 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:18.308575  498165 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:18.308578  498165 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:18.308580  498165 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:18.308583  498165 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:18.308585  498165 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:18.308587  498165 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:18.308589  498165 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:18.308592  498165 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:18.308594  498165 cri.go:89] found id: ""
	I1027 21:56:18.308637  498165 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:18.322937  498165 out.go:203] 
	W1027 21:56:18.323934  498165 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:18.323977  498165 out.go:285] * 
	W1027 21:56:18.327456  498165 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:18.328548  498165 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-txrzm" [24503293-388b-4873-bc11-107a24f28f57] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003990045s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-681393 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681393 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (274.25683ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 21:56:06.081190  496602 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:56:06.081590  496602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:06.081602  496602 out.go:374] Setting ErrFile to fd 2...
	I1027 21:56:06.081608  496602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:56:06.081910  496602 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:56:06.082267  496602 mustload.go:66] Loading cluster: addons-681393
	I1027 21:56:06.082710  496602 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:06.082728  496602 addons.go:606] checking whether the cluster is paused
	I1027 21:56:06.082852  496602 config.go:182] Loaded profile config "addons-681393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 21:56:06.082867  496602 host.go:66] Checking if "addons-681393" exists ...
	I1027 21:56:06.083408  496602 cli_runner.go:164] Run: docker container inspect addons-681393 --format={{.State.Status}}
	I1027 21:56:06.104982  496602 ssh_runner.go:195] Run: systemctl --version
	I1027 21:56:06.105049  496602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-681393
	I1027 21:56:06.124797  496602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/addons-681393/id_rsa Username:docker}
	I1027 21:56:06.225176  496602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 21:56:06.225293  496602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 21:56:06.259766  496602 cri.go:89] found id: "2010575178c32d016eb793c69343589f82cb1304c5a5049e81533293b665846a"
	I1027 21:56:06.259794  496602 cri.go:89] found id: "59650918c62fb587ee2a49199ae15c64f6f3719a8fa3e77d6fa3ea4b87c78d96"
	I1027 21:56:06.259808  496602 cri.go:89] found id: "85ee742586776197ee310692c6a200f779e8db758305d55a7125def35eb872d0"
	I1027 21:56:06.259814  496602 cri.go:89] found id: "f24d2cb4a2b58dc2bc24f42e827751b12c73ae91910a737374f02dd05cdd70e4"
	I1027 21:56:06.259818  496602 cri.go:89] found id: "6467e0e7a8c5ba17505098a059c58d32a6ee545e769e5c916c99054a29ff94b6"
	I1027 21:56:06.259824  496602 cri.go:89] found id: "f5f70b0c5ec76a2853295145878265aa008fe0cbe77013ff63408e80d2427310"
	I1027 21:56:06.259828  496602 cri.go:89] found id: "9f32528dcb836d800baf31c29a504157909c9aeb4fd939a72e8cfba3065149f7"
	I1027 21:56:06.259832  496602 cri.go:89] found id: "153647beb159431c08c90480b877fa98f2bb060c320d6c2828042131e3659147"
	I1027 21:56:06.259836  496602 cri.go:89] found id: "5ddf0325ff467794c9d1abb8c5f60eb6c98bac477b47e36bd5cb7276fec1c305"
	I1027 21:56:06.259854  496602 cri.go:89] found id: "b847234d4f511c8dfe654ee171e250c03a5d67023a74028021aa37c13e72928d"
	I1027 21:56:06.259863  496602 cri.go:89] found id: "4b58171ccaea03a0d305a358c903604753b3af97962b2b977294191045cc1b45"
	I1027 21:56:06.259868  496602 cri.go:89] found id: "f55e91ef28796f9b478b3bad5606a95bd6ffff37c1610987eca6ab253783f719"
	I1027 21:56:06.259872  496602 cri.go:89] found id: "0a08d08180b3cde5f0b89fc6425298c07ab8a523c9263c32b212cad709f28396"
	I1027 21:56:06.259877  496602 cri.go:89] found id: "fb54ab1a61dadc7e0de5c7aa80434eb5e6337187fec7e8acf6e4e2f7fabb5b6b"
	I1027 21:56:06.259882  496602 cri.go:89] found id: "00cc26010baa4c5349e5801ce6c907937fb29b46152c7bf38ab7771ae1b654b5"
	I1027 21:56:06.259891  496602 cri.go:89] found id: "37c2044b18ebd60ae9fc96187fa56ebff13693ac7f2b692f628abe6b41ded249"
	I1027 21:56:06.259899  496602 cri.go:89] found id: "49d0fe83e58c6a053146da8a650240933c8d93672eb4ec4bcd43edabe2bb3dbf"
	I1027 21:56:06.259905  496602 cri.go:89] found id: "bd12cfcd642316f81e332de0d2775ae8eaf95525e8f25908cea48eea9164f30d"
	I1027 21:56:06.259909  496602 cri.go:89] found id: "27e7e3974588987122b7cf914771da60c28383b2f050973614bf8274cc72cf12"
	I1027 21:56:06.259914  496602 cri.go:89] found id: "65ad03529a586a8ebad96273d7e58e641735ce0c4f485e3fed071dea0a819f88"
	I1027 21:56:06.259918  496602 cri.go:89] found id: "768d42a191bfa1082896ed54df7ad99263daeed329af2ff4eb903731e3228a74"
	I1027 21:56:06.259922  496602 cri.go:89] found id: "9ca7e0d969e10595ad0d4c5a3fae0232a2ae25da6e9a0f766cd0c419aa6b5b10"
	I1027 21:56:06.259926  496602 cri.go:89] found id: "c7060ff5377698d09082e25346637f6b6876721ce9f993c71c38626621272267"
	I1027 21:56:06.259930  496602 cri.go:89] found id: "6924a158f2354ba990c7c1691b24f083acabe55af22408dd37de0de9a5219567"
	I1027 21:56:06.259935  496602 cri.go:89] found id: ""
	I1027 21:56:06.259999  496602 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 21:56:06.275043  496602 out.go:203] 
	W1027 21:56:06.276061  496602 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:56:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 21:56:06.276085  496602 out.go:285] * 
	W1027 21:56:06.279249  496602 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 21:56:06.280338  496602 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-681393 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-287960 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-287960 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-nnrh7" [7ea20a74-55af-40b9-a8d1-4680e8f53d9a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-287960 -n functional-287960
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-27 22:12:09.928571089 +0000 UTC m=+1145.263498427
functional_test.go:1645: (dbg) Run:  kubectl --context functional-287960 describe po hello-node-connect-7d85dfc575-nnrh7 -n default
functional_test.go:1645: (dbg) kubectl --context functional-287960 describe po hello-node-connect-7d85dfc575-nnrh7 -n default:
Name:             hello-node-connect-7d85dfc575-nnrh7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-287960/192.168.49.2
Start Time:       Mon, 27 Oct 2025 22:02:09 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-srs45 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-srs45:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nnrh7 to functional-287960
Normal   Pulling    6m49s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m49s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m49s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-287960 logs hello-node-connect-7d85dfc575-nnrh7 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-287960 logs hello-node-connect-7d85dfc575-nnrh7 -n default: exit status 1 (62.069493ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-nnrh7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-287960 logs hello-node-connect-7d85dfc575-nnrh7 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-287960 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-nnrh7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-287960/192.168.49.2
Start Time:       Mon, 27 Oct 2025 22:02:09 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-srs45 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-srs45:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nnrh7 to functional-287960
Normal   Pulling    6m50s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m50s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m50s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-287960 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-287960 logs -l app=hello-node-connect: exit status 1 (64.338407ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-nnrh7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-287960 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-287960 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.80.247
IPs:                      10.108.80.247
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30928/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
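The empty Endpoints field above is only the downstream symptom; the kubelet events give the root cause: crio's short-name policy is enforcing, so the unqualified image name kicbase/echo-server:latest is rejected as ambiguous rather than resolved against a search registry. A hedged sketch of the two obvious remedies; the registries.conf locations follow containers-registries.conf(5) and are assumptions about this node:

	# Point the existing deployment at a fully qualified name (no short-name resolution needed)
	kubectl --context functional-287960 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest
	# Or inspect the node's short-name policy directly
	minikube -p functional-287960 ssh -- \
	  grep -rn short-name /etc/containers/registries.conf /etc/containers/registries.conf.d/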
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-287960
helpers_test.go:243: (dbg) docker inspect functional-287960:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bbda0a423cf71ff462b906f586194521bad8f602dc522d6f6367df06b507c722",
	        "Created": "2025-10-27T22:00:07.806690707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:00:07.84202598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/bbda0a423cf71ff462b906f586194521bad8f602dc522d6f6367df06b507c722/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bbda0a423cf71ff462b906f586194521bad8f602dc522d6f6367df06b507c722/hostname",
	        "HostsPath": "/var/lib/docker/containers/bbda0a423cf71ff462b906f586194521bad8f602dc522d6f6367df06b507c722/hosts",
	        "LogPath": "/var/lib/docker/containers/bbda0a423cf71ff462b906f586194521bad8f602dc522d6f6367df06b507c722/bbda0a423cf71ff462b906f586194521bad8f602dc522d6f6367df06b507c722-json.log",
	        "Name": "/functional-287960",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-287960:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-287960",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bbda0a423cf71ff462b906f586194521bad8f602dc522d6f6367df06b507c722",
	                "LowerDir": "/var/lib/docker/overlay2/d6746fe42a9380f81348fd0f78a9244be22909fdbfd75d740bcfd7e94d13f000-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d6746fe42a9380f81348fd0f78a9244be22909fdbfd75d740bcfd7e94d13f000/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d6746fe42a9380f81348fd0f78a9244be22909fdbfd75d740bcfd7e94d13f000/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d6746fe42a9380f81348fd0f78a9244be22909fdbfd75d740bcfd7e94d13f000/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-287960",
	                "Source": "/var/lib/docker/volumes/functional-287960/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-287960",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-287960",
	                "name.minikube.sigs.k8s.io": "functional-287960",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0f78706febcb5ab3634a875e8e7fe2577a88d8dd6fd8600211f5457b41b411c",
	            "SandboxKey": "/var/run/docker/netns/c0f78706febc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-287960": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:6f:b9:92:89:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "753f4194de8652fc4c87abdc876f35d7810f0331a0d72197806017ebc1857094",
	                    "EndpointID": "c76044b9af098c3006c1f1e5067545031e55be4d15fe70d64ffe304d2d4a7f85",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-287960",
	                        "bbda0a423cf7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
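For reference, the Ports block in this inspect output is what the repeated docker container inspect -f calls earlier in the report read one mapping at a time; the same Go template works for any published port. For example, the host side of the apiserver port 8441/tcp, which the output above maps to 32781:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' \
	  functional-287960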
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-287960 -n functional-287960
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-287960 logs -n 25: (1.308199127s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-287960 image save --daemon kicbase/echo-server:functional-287960 --alsologtostderr          │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ addons         │ functional-287960 addons list                                                                          │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ addons         │ functional-287960 addons list -o json                                                                  │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ ssh            │ functional-287960 ssh sudo cat /etc/test/nested/copy/485668/hosts                                      │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ ssh            │ functional-287960 ssh sudo cat /etc/ssl/certs/485668.pem                                               │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ ssh            │ functional-287960 ssh sudo cat /usr/share/ca-certificates/485668.pem                                   │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ ssh            │ functional-287960 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ ssh            │ functional-287960 ssh sudo cat /etc/ssl/certs/4856682.pem                                              │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ ssh            │ functional-287960 ssh sudo cat /usr/share/ca-certificates/4856682.pem                                  │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ ssh            │ functional-287960 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ image          │ functional-287960 image ls --format short --alsologtostderr                                            │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ image          │ functional-287960 image ls --format json --alsologtostderr                                             │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ image          │ functional-287960 image ls --format table --alsologtostderr                                            │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ image          │ functional-287960 image ls --format yaml --alsologtostderr                                             │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ ssh            │ functional-287960 ssh pgrep buildkitd                                                                  │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │                     │
	│ image          │ functional-287960 image build -t localhost/my-image:functional-287960 testdata/build --alsologtostderr │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ image          │ functional-287960 image ls                                                                             │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ update-context │ functional-287960 update-context --alsologtostderr -v=2                                                │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ update-context │ functional-287960 update-context --alsologtostderr -v=2                                                │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ update-context │ functional-287960 update-context --alsologtostderr -v=2                                                │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:02 UTC │ 27 Oct 25 22:02 UTC │
	│ service        │ functional-287960 service list                                                                         │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:11 UTC │ 27 Oct 25 22:11 UTC │
	│ service        │ functional-287960 service list -o json                                                                 │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:11 UTC │ 27 Oct 25 22:11 UTC │
	│ service        │ functional-287960 service --namespace=default --https --url hello-node                                 │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:11 UTC │                     │
	│ service        │ functional-287960 service hello-node --url --format={{.IP}}                                            │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:11 UTC │                     │
	│ service        │ functional-287960 service hello-node --url                                                             │ functional-287960 │ jenkins │ v1.37.0 │ 27 Oct 25 22:11 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:01:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:01:48.775475  520039 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:01:48.775781  520039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:01:48.775796  520039 out.go:374] Setting ErrFile to fd 2...
	I1027 22:01:48.775804  520039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:01:48.776075  520039 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:01:48.776647  520039 out.go:368] Setting JSON to false
	I1027 22:01:48.777731  520039 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6248,"bootTime":1761596261,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:01:48.777837  520039 start.go:143] virtualization: kvm guest
	I1027 22:01:48.779262  520039 out.go:179] * [functional-287960] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:01:48.780365  520039 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:01:48.780369  520039 notify.go:221] Checking for updates...
	I1027 22:01:48.782374  520039 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:01:48.783528  520039 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:01:48.784508  520039 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:01:48.785449  520039 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:01:48.786388  520039 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:01:48.787579  520039 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:01:48.788079  520039 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:01:48.814375  520039 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:01:48.814451  520039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:01:48.876237  520039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-27 22:01:48.864624446 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:01:48.876369  520039 docker.go:318] overlay module found
	I1027 22:01:48.878449  520039 out.go:179] * Using the docker driver based on existing profile
	I1027 22:01:48.879322  520039 start.go:307] selected driver: docker
	I1027 22:01:48.879342  520039 start.go:928] validating driver "docker" against &{Name:functional-287960 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-287960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:01:48.879463  520039 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:01:48.879565  520039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:01:48.945893  520039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-27 22:01:48.935119227 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:01:48.946790  520039 cni.go:84] Creating CNI manager for ""
	I1027 22:01:48.946909  520039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:01:48.946992  520039 start.go:351] cluster config:
	{Name:functional-287960 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-287960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:01:48.949044  520039 out.go:179] * dry-run validation complete!
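
	The start log above ends after driver validation against the existing profile ("dry-run validation complete!"), so no cluster state was mutated. The same validation can be repeated by hand with the start command's --dry-run flag, as used for this log; a sketch:
	
	  out/minikube-linux-amd64 start -p functional-287960 --dry-run --alsologtostderr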
	
	
	==> CRI-O <==
	Oct 27 22:02:18 functional-287960 crio[3570]: time="2025-10-27T22:02:18.633389717Z" level=info msg="Started container" PID=7438 containerID=a097c04fe02baf997ae712505968d659d3d3e36e31214bc6e24f17bf87ff730f description=default/mysql-5bb876957f-qr9vk/mysql id=05bb45eb-a7e4-42c4-8dd2-3025026c234f name=/runtime.v1.RuntimeService/StartContainer sandboxID=86335f73db1ecb8ff0d1397696e1c3084c1e014603547c7ad367e58ad2f4aef1
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.507069725Z" level=info msg="Stopping pod sandbox: 52bfc07e49c94e6c4d80c63e9803f791f94845a5cf0cb2cf6a245e7bbad27b59" id=66d032cb-c1cd-408e-b21b-544ab3cdd852 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.507129126Z" level=info msg="Stopped pod sandbox (already stopped): 52bfc07e49c94e6c4d80c63e9803f791f94845a5cf0cb2cf6a245e7bbad27b59" id=66d032cb-c1cd-408e-b21b-544ab3cdd852 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.507606668Z" level=info msg="Removing pod sandbox: 52bfc07e49c94e6c4d80c63e9803f791f94845a5cf0cb2cf6a245e7bbad27b59" id=3fcf8982-1c93-4e9c-883e-c80554e15414 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.51040461Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.510466756Z" level=info msg="Removed pod sandbox: 52bfc07e49c94e6c4d80c63e9803f791f94845a5cf0cb2cf6a245e7bbad27b59" id=3fcf8982-1c93-4e9c-883e-c80554e15414 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.510883608Z" level=info msg="Stopping pod sandbox: 5728dd8a59f2240ae7608d1d6a0fcbbdcaf0a215d01cd0f33e56d25f7f4814d5" id=f0fbd7e4-c093-4472-8bf2-47056e22e3b4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.510922629Z" level=info msg="Stopped pod sandbox (already stopped): 5728dd8a59f2240ae7608d1d6a0fcbbdcaf0a215d01cd0f33e56d25f7f4814d5" id=f0fbd7e4-c093-4472-8bf2-47056e22e3b4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.511227452Z" level=info msg="Removing pod sandbox: 5728dd8a59f2240ae7608d1d6a0fcbbdcaf0a215d01cd0f33e56d25f7f4814d5" id=2a49a3e5-7b5c-47d9-ab1a-16ff8f7ca28c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.513520754Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.513580785Z" level=info msg="Removed pod sandbox: 5728dd8a59f2240ae7608d1d6a0fcbbdcaf0a215d01cd0f33e56d25f7f4814d5" id=2a49a3e5-7b5c-47d9-ab1a-16ff8f7ca28c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.513915949Z" level=info msg="Stopping pod sandbox: 8f24e3b84f9d3591d9b132c1e7fb25c1b913d5dc741d3fd00ad3718b39d97b28" id=0bbf450b-6a5c-4364-affe-1d7b7b3b4168 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.513962923Z" level=info msg="Stopped pod sandbox (already stopped): 8f24e3b84f9d3591d9b132c1e7fb25c1b913d5dc741d3fd00ad3718b39d97b28" id=0bbf450b-6a5c-4364-affe-1d7b7b3b4168 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.514290972Z" level=info msg="Removing pod sandbox: 8f24e3b84f9d3591d9b132c1e7fb25c1b913d5dc741d3fd00ad3718b39d97b28" id=f6726709-e5f9-439f-8f82-d1b60a2c2057 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.518266497Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:02:20 functional-287960 crio[3570]: time="2025-10-27T22:02:20.518329402Z" level=info msg="Removed pod sandbox: 8f24e3b84f9d3591d9b132c1e7fb25c1b913d5dc741d3fd00ad3718b39d97b28" id=f6726709-e5f9-439f-8f82-d1b60a2c2057 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:02:26 functional-287960 crio[3570]: time="2025-10-27T22:02:26.500432046Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ac66eebd-1fde-4eb4-900a-226898269c88 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:02:28 functional-287960 crio[3570]: time="2025-10-27T22:02:28.500577464Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=66905c5e-541b-45cb-9c26-02d94d835751 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:02:56 functional-287960 crio[3570]: time="2025-10-27T22:02:56.500370859Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a5370a3e-f7f7-4809-9c45-d0994528a3ba name=/runtime.v1.ImageService/PullImage
	Oct 27 22:03:16 functional-287960 crio[3570]: time="2025-10-27T22:03:16.500636363Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=71000db3-fcc3-4d37-9016-72b336107cf7 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:03:46 functional-287960 crio[3570]: time="2025-10-27T22:03:46.500234841Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ab4eaaaa-a3fa-4f6c-a179-26d0b32d4b3a name=/runtime.v1.ImageService/PullImage
	Oct 27 22:04:46 functional-287960 crio[3570]: time="2025-10-27T22:04:46.500541707Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3cec61e1-ebbe-4ec3-b21e-52a44cd8fc8d name=/runtime.v1.ImageService/PullImage
	Oct 27 22:05:20 functional-287960 crio[3570]: time="2025-10-27T22:05:20.500186683Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fa94e72e-ceba-40c2-85e0-1702cc828285 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:07:34 functional-287960 crio[3570]: time="2025-10-27T22:07:34.50056151Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e7616fa8-24f7-41dc-a05f-cdc53b1764cf name=/runtime.v1.ImageService/PullImage
	Oct 27 22:08:04 functional-287960 crio[3570]: time="2025-10-27T22:08:04.500321414Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=25ac1181-d94e-457c-ab21-c201bfaefeab name=/runtime.v1.ImageService/PullImage
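
	Note the kicbase/echo-server:latest pull being re-requested from 22:02:26 through 22:08:04 with no corresponding "Pulled image" entry, suggesting the pull never completed into the CRI-O store. A minimal sketch for checking pull state by hand from the node (crictl is available inside the kicbase image; run under sudo):
	
	  out/minikube-linux-amd64 -p functional-287960 ssh -- sudo crictl images | grep echo-server
	  out/minikube-linux-amd64 -p functional-287960 ssh -- sudo crictl pull kicbase/echo-server:latest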
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a097c04fe02ba       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   86335f73db1ec       mysql-5bb876957f-qr9vk                       default
	244e0a51ca015       docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8                  10 minutes ago      Running             myfrontend                  0                   4de81f0891b33       sp-pod                                       default
	7f41f163b70cd       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   56723f68dbbde       nginx-svc                                    default
	c3b60ea737c03       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   66e24c525770a       kubernetes-dashboard-855c9754f9-9s4d9        kubernetes-dashboard
	f5252c9d1ab76       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   2a6ce584a9b86       dashboard-metrics-scraper-77bf4d6c4c-sxv2x   kubernetes-dashboard
	0627b546cb97b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   e19a96f2dbb59       busybox-mount                                default
	4f47fc568f25b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   180252f9ce91d       storage-provisioner                          kube-system
	05721621de6f4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   a1fda3226cddd       kube-apiserver-functional-287960             kube-system
	146aada72e841       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   4fe6eaef2d069       kube-scheduler-functional-287960             kube-system
	e983cd43aa55c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   c0ecae5841484       kube-controller-manager-functional-287960    kube-system
	6faa09bc9adbd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   2ea5fcae6dd3c       etcd-functional-287960                       kube-system
	127e1140cdedf       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     1                   c0ecae5841484       kube-controller-manager-functional-287960    kube-system
	ba9bc471a58f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   180252f9ce91d       storage-provisioner                          kube-system
	d6b20a79d854f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   dd6f6ad231095       kube-proxy-lwfl2                             kube-system
	ac85c29c62f82       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   d0eb143aeff0e       coredns-66bc5c9577-72r6r                     kube-system
	455dd8875a259       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   18d9e56af2286       kindnet-wsw2x                                kube-system
	bcef905af0aac       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   d0eb143aeff0e       coredns-66bc5c9577-72r6r                     kube-system
	fe40a49839ee9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   18d9e56af2286       kindnet-wsw2x                                kube-system
	1f4efdb8e88cf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   dd6f6ad231095       kube-proxy-lwfl2                             kube-system
	55c17ba0382b5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   4fe6eaef2d069       kube-scheduler-functional-287960             kube-system
	1f87f61d1e0c9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   2ea5fcae6dd3c       etcd-functional-287960                       kube-system
	
	
	==> coredns [ac85c29c62f82dd18e78924940ebe0ef298419e006f75021b2e1081a6741656d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33474 - 7111 "HINFO IN 1962936949903784878.151520563075028423. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.025291628s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
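
	The connection-refused errors above are the kubernetes plugin's list/watch retries against the Service ClusterIP 10.96.0.1:443 during the control-plane restart window; per the WARNING line, CoreDNS keeps serving with an unsynced cache until a list succeeds. A quick post-hoc check that the pods settled, using the standard kube-dns label:
	
	  kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20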
	
	
	==> coredns [bcef905af0aacd7aff4b0b88f81666fbed6544270dceca9f71c269dd9f6ce25e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36973 - 36748 "HINFO IN 5563654843479725546.403088129659821532. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027571167s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-287960
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-287960
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=functional-287960
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_00_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:00:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-287960
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:12:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:09:52 +0000   Mon, 27 Oct 2025 22:00:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:09:52 +0000   Mon, 27 Oct 2025 22:00:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:09:52 +0000   Mon, 27 Oct 2025 22:00:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:09:52 +0000   Mon, 27 Oct 2025 22:00:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-287960
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                616214f4-d3f5-4e02-8d1c-45a76c4f490a
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-f9h87                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-nnrh7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-qr9vk                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-72r6r                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-287960                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-wsw2x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-287960              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-287960     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lwfl2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-287960              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-sxv2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9s4d9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-287960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-287960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-287960 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-287960 event: Registered Node functional-287960 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-287960 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-287960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-287960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-287960 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-287960 event: Registered Node functional-287960 in Controller
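
	The node report above is plain kubectl describe node output. When triaging scheduling headroom, the resource summary can be regenerated on its own; a sketch:
	
	  kubectl describe node functional-287960 | grep -A 10 'Allocated resources'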
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
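
	The dmesg excerpt is dominated by martian-source warnings: packets with loopback or pod-network source addresses arriving on dev eth0, logged because martian logging (net.ipv4.conf.*.log_martians) is enabled on this host. To isolate just these entries with readable timestamps, a sketch:
	
	  sudo dmesg -T | grep -i martian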
	
	
	==> etcd [1f87f61d1e0c98c7feb2e8aeb85ab89fba310001a4a722a4d487b81ff728fdcc] <==
	{"level":"warn","ts":"2025-10-27T22:00:22.108629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:00:22.115581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:00:22.122761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:00:22.140964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:00:22.148995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:00:22.157213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:00:22.213296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35328","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:01:18.632666Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T22:01:18.632783Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-287960","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-27T22:01:18.632874Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T22:01:18.634453Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T22:01:18.634536Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:01:18.634556Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-27T22:01:18.634607Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T22:01:18.634621Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T22:01:18.634667Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T22:01:18.634703Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T22:01:18.634720Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T22:01:18.634625Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T22:01:18.634741Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T22:01:18.634751Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:01:18.636073Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-27T22:01:18.636128Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:01:18.636152Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-27T22:01:18.636158Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-287960","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [6faa09bc9adbd2008d612d38ee48585257d8ae977fb120e703ebd3b586f938a7] <==
	{"level":"warn","ts":"2025-10-27T22:01:21.914604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.922666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.929443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.936515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.943133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.950980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.960173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.967776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.974697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.983326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.992256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:21.999733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.006344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.013701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.021602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.027862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.034163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.041395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.059037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.065148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.071452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:01:22.120864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60488","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:11:21.591541Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1110}
	{"level":"info","ts":"2025-10-27T22:11:21.610209Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1110,"took":"18.341548ms","hash":2884397670,"current-db-size-bytes":3448832,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-27T22:11:21.610261Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2884397670,"revision":1110,"compact-revision":-1}
	
	
	==> kernel <==
	 22:12:11 up  1:54,  0 user,  load average: 0.33, 0.28, 6.59
	Linux functional-287960 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [455dd8875a259d1e8fb9f6482fecad58a8d96411708798b043cd5de3270d70ca] <==
	I1027 22:10:08.456631       1 main.go:301] handling current node
	I1027 22:10:18.459397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:10:18.459448       1 main.go:301] handling current node
	I1027 22:10:28.457127       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:10:28.457169       1 main.go:301] handling current node
	I1027 22:10:38.459441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:10:38.459476       1 main.go:301] handling current node
	I1027 22:10:48.459067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:10:48.459099       1 main.go:301] handling current node
	I1027 22:10:58.457289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:10:58.457326       1 main.go:301] handling current node
	I1027 22:11:08.464376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:11:08.464409       1 main.go:301] handling current node
	I1027 22:11:18.458729       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:11:18.458782       1 main.go:301] handling current node
	I1027 22:11:28.456985       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:11:28.457025       1 main.go:301] handling current node
	I1027 22:11:38.461963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:11:38.461996       1 main.go:301] handling current node
	I1027 22:11:48.459014       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:11:48.459045       1 main.go:301] handling current node
	I1027 22:11:58.455514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:11:58.455559       1 main.go:301] handling current node
	I1027 22:12:08.455822       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:12:08.455886       1 main.go:301] handling current node
	
	
	==> kindnet [fe40a49839ee9507a3a213fb470fbce35fe6690c2ad7b85b1e1eb12ed5914d60] <==
	I1027 22:00:31.270396       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:00:31.270723       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1027 22:00:31.270863       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:00:31.270879       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:00:31.270902       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:00:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:00:31.565853       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:00:31.566075       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:00:31.566102       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:00:31.566455       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:00:31.966304       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:00:31.966340       1 metrics.go:72] Registering metrics
	I1027 22:00:31.966433       1 controller.go:711] "Syncing nftables rules"
	I1027 22:00:41.481761       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:00:41.481853       1 main.go:301] handling current node
	I1027 22:00:51.484849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:00:51.484898       1 main.go:301] handling current node
	I1027 22:01:01.483145       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:01:01.483193       1 main.go:301] handling current node
	
	
	==> kube-apiserver [05721621de6f44547e2d51d9a211d6217caf8c3712943e08c6b4327635fab3ec] <==
	I1027 22:01:22.624114       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:01:23.502893       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:01:23.531717       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1027 22:01:23.807932       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1027 22:01:23.809587       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:01:23.814589       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:01:24.363814       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:01:24.462782       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:01:24.514432       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:01:24.521079       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:01:30.712710       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:01:41.808697       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.33.2"}
	I1027 22:01:45.784401       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.32.146"}
	I1027 22:01:49.814115       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:01:49.906402       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.4.45"}
	I1027 22:01:49.932274       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.34.33"}
	I1027 22:02:00.139458       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.224.12"}
	E1027 22:02:08.089749       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51312: use of closed network connection
	I1027 22:02:09.601329       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.80.247"}
	I1027 22:02:10.441209       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.242.28"}
	E1027 22:02:15.945547       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35764: use of closed network connection
	E1027 22:02:24.564385       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35778: use of closed network connection
	E1027 22:02:25.971849       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39978: use of closed network connection
	E1027 22:02:27.336677       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40000: use of closed network connection
	I1027 22:11:22.530645       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [127e1140cdedf73f1d9ccf534f939f26aab80522548258800baf4a1f61f3b9f4] <==
	I1027 22:01:09.745760       1 serving.go:386] Generated self-signed cert in-memory
	I1027 22:01:10.297501       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1027 22:01:10.297525       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:01:10.298922       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1027 22:01:10.298922       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1027 22:01:10.299334       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1027 22:01:10.299360       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1027 22:01:20.301171       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [e983cd43aa55cb7de90f43da52d9f95286052c30c0edb106cc8ac3847898f693] <==
	I1027 22:01:25.933562       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:01:25.933599       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:01:25.933634       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 22:01:25.933690       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:01:25.933693       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:01:25.933817       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:01:25.934010       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 22:01:25.934314       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:01:25.934405       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:01:25.934516       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:01:25.934653       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-287960"
	I1027 22:01:25.934703       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:01:25.934802       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:01:25.937617       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:01:25.938779       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:01:25.939853       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 22:01:25.942577       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:01:25.945214       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:01:25.959279       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 22:01:49.857166       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 22:01:49.861004       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 22:01:49.864171       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 22:01:49.866354       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 22:01:49.868134       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 22:01:49.872658       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1f4efdb8e88cf7853f13c3f94d4eac3430a020da4ba7bf6a0c6a8abd0ab5de42] <==
	I1027 22:00:31.140149       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:00:31.211970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:00:31.312541       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:00:31.312579       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 22:00:31.312676       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:00:31.332506       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:00:31.332581       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:00:31.337974       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:00:31.338466       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:00:31.338508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:00:31.339921       1 config.go:200] "Starting service config controller"
	I1027 22:00:31.339965       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:00:31.339988       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:00:31.340366       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:00:31.340509       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:00:31.340530       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:00:31.340597       1 config.go:309] "Starting node config controller"
	I1027 22:00:31.340613       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:00:31.340621       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:00:31.440485       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:00:31.440592       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:00:31.440629       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d6b20a79d854fed11e755bc7acf0af921e7cae982268e1a3ae6c97ed3ea32a20] <==
	I1027 22:01:08.154737       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1027 22:01:08.155714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-287960&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:01:09.212744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-287960&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:01:12.292907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-287960&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:01:17.604738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-287960&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1027 22:01:28.555694       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:01:28.555730       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 22:01:28.555809       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:01:28.575223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:01:28.575299       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:01:28.580812       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:01:28.581120       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:01:28.581155       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:01:28.582700       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:01:28.582738       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:01:28.582794       1 config.go:200] "Starting service config controller"
	I1027 22:01:28.582805       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:01:28.582819       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:01:28.582825       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:01:28.582829       1 config.go:309] "Starting node config controller"
	I1027 22:01:28.582851       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:01:28.682995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:01:28.682998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:01:28.683044       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:01:28.683009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [146aada72e841d93e2eff38d0356dda7681a664ad11d1dd4a38a088e0d41bbfc] <==
	I1027 22:01:21.506222       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:01:22.508404       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:01:22.508437       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:01:22.508448       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:01:22.508457       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:01:22.535268       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:01:22.535301       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:01:22.538386       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:01:22.538440       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:01:22.538743       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:01:22.538812       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:01:22.639544       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [55c17ba0382b58950688d0332a10f3a01a2f5ffd43dc96435f004c23dfde2e82] <==
	E1027 22:00:22.657444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:00:22.657459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:00:22.657647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:00:22.657657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:00:22.658146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:00:22.658190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:00:23.469007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:00:23.499375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:00:23.612010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:00:23.645258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:00:23.645258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:00:23.779717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:00:23.786967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:00:23.825084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:00:23.843457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:00:23.845504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:00:23.875979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:00:23.918257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1027 22:00:25.353611       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:01:18.524026       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:01:18.524067       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 22:01:18.524163       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 22:01:18.524188       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 22:01:18.524240       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 22:01:18.524271       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 22:09:31 functional-287960 kubelet[4112]: E1027 22:09:31.500433    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:09:35 functional-287960 kubelet[4112]: E1027 22:09:35.500354    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:09:45 functional-287960 kubelet[4112]: E1027 22:09:45.500152    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:09:49 functional-287960 kubelet[4112]: E1027 22:09:49.500215    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:10:00 functional-287960 kubelet[4112]: E1027 22:10:00.500815    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:10:01 functional-287960 kubelet[4112]: E1027 22:10:01.500453    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:10:11 functional-287960 kubelet[4112]: E1027 22:10:11.500026    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:10:16 functional-287960 kubelet[4112]: E1027 22:10:16.500656    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:10:24 functional-287960 kubelet[4112]: E1027 22:10:24.499815    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:10:27 functional-287960 kubelet[4112]: E1027 22:10:27.499595    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:10:38 functional-287960 kubelet[4112]: E1027 22:10:38.499506    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:10:41 functional-287960 kubelet[4112]: E1027 22:10:41.500062    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:10:50 functional-287960 kubelet[4112]: E1027 22:10:50.500742    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:10:54 functional-287960 kubelet[4112]: E1027 22:10:54.499965    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:11:02 functional-287960 kubelet[4112]: E1027 22:11:02.500235    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:11:08 functional-287960 kubelet[4112]: E1027 22:11:08.500284    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:11:15 functional-287960 kubelet[4112]: E1027 22:11:15.499626    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:11:22 functional-287960 kubelet[4112]: E1027 22:11:22.500114    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:11:30 functional-287960 kubelet[4112]: E1027 22:11:30.500372    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:11:33 functional-287960 kubelet[4112]: E1027 22:11:33.500039    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:11:45 functional-287960 kubelet[4112]: E1027 22:11:45.500191    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:11:46 functional-287960 kubelet[4112]: E1027 22:11:46.499908    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:11:58 functional-287960 kubelet[4112]: E1027 22:11:58.502092    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	Oct 27 22:12:00 functional-287960 kubelet[4112]: E1027 22:12:00.500206    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nnrh7" podUID="7ea20a74-55af-40b9-a8d1-4680e8f53d9a"
	Oct 27 22:12:09 functional-287960 kubelet[4112]: E1027 22:12:09.500113    4112 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f9h87" podUID="94a9efb1-1c7a-41b8-b8bd-983e5f31f882"
	
	
	==> kubernetes-dashboard [c3b60ea737c034368c172dab81e445d9ff033d8f03cd35ec75c32795f67a2e82] <==
	2025/10/27 22:01:58 Starting overwatch
	2025/10/27 22:01:58 Using namespace: kubernetes-dashboard
	2025/10/27 22:01:58 Using in-cluster config to connect to apiserver
	2025/10/27 22:01:58 Using secret token for csrf signing
	2025/10/27 22:01:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:01:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:01:58 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 22:01:58 Generating JWE encryption key
	2025/10/27 22:01:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:01:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:01:58 Initializing JWE encryption key from synchronized object
	2025/10/27 22:01:58 Creating in-cluster Sidecar client
	2025/10/27 22:01:58 Successful request to sidecar
	2025/10/27 22:01:58 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [4f47fc568f25b0f4e24c0a3147332ced406f22e2030439adab9a00b99e2318fa] <==
	W1027 22:11:47.681695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:49.685262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:49.689319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:51.692483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:51.696512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:53.699297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:53.704324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:55.707852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:55.712138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:57.715211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:57.719857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:59.722798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:11:59.727457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:01.730472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:01.734827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:03.737651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:03.741597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:05.744062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:05.747429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:07.752359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:07.756181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:09.759781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:09.764652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:11.767917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:12:11.772502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ba9bc471a58f5f393a8df1078538c853e7e26deac89eda37403c7faa64adf469] <==
	I1027 22:01:08.067692       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:01:08.071210       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-287960 -n functional-287960
helpers_test.go:269: (dbg) Run:  kubectl --context functional-287960 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-f9h87 hello-node-connect-7d85dfc575-nnrh7
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-287960 describe pod busybox-mount hello-node-75c85bcc94-f9h87 hello-node-connect-7d85dfc575-nnrh7
helpers_test.go:290: (dbg) kubectl --context functional-287960 describe pod busybox-mount hello-node-75c85bcc94-f9h87 hello-node-connect-7d85dfc575-nnrh7:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-287960/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 22:01:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://0627b546cb97b2103faa6c740e61163a01fd07ba8e416a6c6e664bf357d21133
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Oct 2025 22:01:52 +0000
	      Finished:     Mon, 27 Oct 2025 22:01:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m5qdk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-m5qdk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-287960
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.587s (2.587s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-f9h87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-287960/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 22:01:45 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zl82 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7zl82:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-f9h87 to functional-287960
	  Normal   Pulling    7m26s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m26s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m26s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    14s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     14s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-nnrh7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-287960/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 22:02:09 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-srs45 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-srs45:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nnrh7 to functional-287960
	  Normal   Pulling    6m52s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m52s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.91s)
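The recurring kubelet error above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") means CRI-O refuses to resolve the unqualified image name because more than one configured registry could serve it. One node-level fix, sketched here on the assumption that the minikube node uses the stock containers-image configuration layout (the file path and restart step below are illustrative, not observed in this run), is to pin the short name to a single registry via an alias:

	# assumption: standard containers-image config layout inside the node
	out/minikube-linux-amd64 -p functional-287960 ssh "sudo tee /etc/containers/registries.conf.d/99-echo-server.conf" <<-'EOF'
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	out/minikube-linux-amd64 -p functional-287960 ssh "sudo systemctl restart crio"

Alternatively, fully qualifying the reference in the test manifests (docker.io/kicbase/echo-server) sidesteps short-name resolution entirely.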

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-287960 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-287960 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-f9h87" [94a9efb1-1c7a-41b8-b8bd-983e5f31f882] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-287960 -n functional-287960
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-27 22:11:46.119060446 +0000 UTC m=+1121.453987796
functional_test.go:1460: (dbg) Run:  kubectl --context functional-287960 describe po hello-node-75c85bcc94-f9h87 -n default
functional_test.go:1460: (dbg) kubectl --context functional-287960 describe po hello-node-75c85bcc94-f9h87 -n default:
Name:             hello-node-75c85bcc94-f9h87
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-287960/192.168.49.2
Start Time:       Mon, 27 Oct 2025 22:01:45 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zl82 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7zl82:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-f9h87 to functional-287960
  Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m48s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-287960 logs hello-node-75c85bcc94-f9h87 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-287960 logs hello-node-75c85bcc94-f9h87 -n default: exit status 1 (66.507416ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-f9h87" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-287960 logs hello-node-75c85bcc94-f9h87 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)
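Nothing else in the deployment is unhealthy; the pod stays Pending purely on the pull. Re-running the same two test commands with a fully qualified reference avoids short-name resolution altogether (the docker.io prefix is an assumption about where kicbase/echo-server is hosted):

	# docker.io prefix assumed; everything else mirrors the test steps
	kubectl --context functional-287960 create deployment hello-node --image=docker.io/kicbase/echo-server
	kubectl --context functional-287960 expose deployment hello-node --type=NodePort --port=8080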

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image load --daemon kicbase/echo-server:functional-287960 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-287960" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)
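ImageReloadDaemon and ImageTagAndLoadDaemon below fail the same assertion, so one manual reproduction covers all three daemon-load variants (the commands mirror the test steps; the grep is just a convenience for reading the listing):

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-287960
	out/minikube-linux-amd64 -p functional-287960 image load --daemon kicbase/echo-server:functional-287960 --alsologtostderr
	out/minikube-linux-amd64 -p functional-287960 image ls | grep echo-server

If the final listing is empty while the load exits zero, the bug is in the load path itself rather than in the test's expectations.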

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image load --daemon kicbase/echo-server:functional-287960 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-287960" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-287960
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image load --daemon kicbase/echo-server:functional-287960 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-287960" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image save kicbase/echo-server:functional-287960 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)
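This failure also seeds the ImageLoadFromFile failure below: image save exited cleanly without writing the tar, so the later load stats a missing file. Checking the archive immediately after saving makes that explicit (the /tmp path is illustrative, not the workspace path the test uses):

	# /tmp path illustrative
	out/minikube-linux-amd64 -p functional-287960 image save kicbase/echo-server:functional-287960 /tmp/echo-server-save.tar --alsologtostderr
	test -s /tmp/echo-server-save.tar || echo 'image save produced no archive'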

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1027 22:02:06.850260  524254 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:02:06.850409  524254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:02:06.850419  524254 out.go:374] Setting ErrFile to fd 2...
	I1027 22:02:06.850423  524254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:02:06.850607  524254 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:02:06.852071  524254 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:02:06.852240  524254 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:02:06.852668  524254 cli_runner.go:164] Run: docker container inspect functional-287960 --format={{.State.Status}}
	I1027 22:02:06.870118  524254 ssh_runner.go:195] Run: systemctl --version
	I1027 22:02:06.870173  524254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-287960
	I1027 22:02:06.886326  524254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/functional-287960/id_rsa Username:docker}
	I1027 22:02:06.983113  524254 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1027 22:02:06.983174  524254 cache_images.go:255] Failed to load cached images for "functional-287960": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1027 22:02:06.983197  524254 cache_images.go:267] failed pushing to: functional-287960

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
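
This failure is a direct cascade from ImageSaveToFile: the stderr above aborts on `stat .../echo-server-save.tar: no such file or directory`, i.e. the tarball the previous test was supposed to write. Fixing the save path should clear this failure for free.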

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-287960
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image save --daemon kicbase/echo-server:functional-287960 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-287960
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-287960: exit status 1 (17.862166ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-287960

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-287960

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
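
Note the name the test expects back: it saves `kicbase/echo-server:functional-287960` but inspects `localhost/kicbase/echo-server:functional-287960`, which appears to be the `localhost/` prefix crio attaches to unqualified local names. Since `docker rmi` removed the original tag first, an empty Docker-side store is exactly what you would see if `image save --daemon` never pushed anything back. A broader check than the single-tag inspect (hedging on which tag the daemon would receive):

	docker images | grep echo-server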

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 service --namespace=default --https --url hello-node: exit status 115 (542.439926ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31327
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-287960 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
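
Exit status 115 (SVC_UNREACHABLE) is consistent with the ServiceCmd/DeployApp timeout recorded in the failure summary at the top of this report: the hello-node deployment never produced a running pod, so every `service` subcommand that validates backing pods fails the same way; the Format and URL subtests below are identical. Assuming the kubeconfig context that minikube writes for the profile, the empty backend would be visible with:

	kubectl --context functional-287960 get deploy,endpoints hello-node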

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 service hello-node --url --format={{.IP}}: exit status 115 (543.748949ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-287960 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 service hello-node --url: exit status 115 (543.405698ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31327
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-287960 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31327
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
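
Worth noting: stdout still carries http://192.168.49.2:31327 and the test even logs "found endpoint", which suggests the NodePort lookup from the Service object succeeds and only the running-pod check fails. The port can be read straight from the Service spec:

	kubectl --context functional-287960 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'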

TestJSONOutput/pause/Command (2.23s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-934737 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-934737 --output=json --user=testUser: exit status 80 (2.227134208s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"002197e4-94a6-449e-9525-987ba42bdfc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-934737 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"21831f34-2926-47ce-bd7b-1adbe8e084a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-27T22:21:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"57ed713c-d014-4910-bcb9-f86818bbde7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-934737 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.23s)
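
The GUEST_PAUSE error bottoms out in `sudo runc list -f json` failing with `open /run/runc: no such file or directory`; the unpause failure below and TestPause/serial/Pause later in this report hit the same wall. One plausible reading, not confirmed by these logs, is that the node's low-level runtime keeps its state somewhere other than /run/runc (crun, for instance), so minikube's hard-coded `runc list` has nothing to read. Reproducing from the host:

	out/minikube-linux-amd64 -p json-output-934737 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p json-output-934737 ssh -- ls /run/runc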

TestJSONOutput/unpause/Command (1.87s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-934737 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-934737 --output=json --user=testUser: exit status 80 (1.872932729s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b52518ad-20a2-43b3-b0d6-80fe7bf7864a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-934737 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"fcca3abc-41c0-490e-8c10-0098f3473333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-27T22:21:07Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"fba31c72-95c4-4eba-8e76-8e3e5bc1e3a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-934737 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.87s)

TestPause/serial/Pause (7.14s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-067652 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-067652 --alsologtostderr -v=5: exit status 80 (2.319043024s)

                                                
                                                
-- stdout --
	* Pausing node pause-067652 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:34:32.568052  672842 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:34:32.568327  672842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:34:32.568338  672842 out.go:374] Setting ErrFile to fd 2...
	I1027 22:34:32.568344  672842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:34:32.570405  672842 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:34:32.570899  672842 out.go:368] Setting JSON to false
	I1027 22:34:32.570972  672842 mustload.go:66] Loading cluster: pause-067652
	I1027 22:34:32.571381  672842 config.go:182] Loaded profile config "pause-067652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:34:32.571869  672842 cli_runner.go:164] Run: docker container inspect pause-067652 --format={{.State.Status}}
	I1027 22:34:32.609074  672842 host.go:66] Checking if "pause-067652" exists ...
	I1027 22:34:32.609462  672842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:34:32.705540  672842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-27 22:34:32.691790679 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:34:32.706482  672842 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-067652 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 22:34:32.708838  672842 out.go:179] * Pausing node pause-067652 ... 
	I1027 22:34:32.710006  672842 host.go:66] Checking if "pause-067652" exists ...
	I1027 22:34:32.710419  672842 ssh_runner.go:195] Run: systemctl --version
	I1027 22:34:32.710478  672842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:32.738909  672842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:32.860231  672842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:34:32.884762  672842 pause.go:52] kubelet running: true
	I1027 22:34:32.884844  672842 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:34:33.060529  672842 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:34:33.060730  672842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:34:33.137311  672842 cri.go:89] found id: "a5b7b50d4040b9768126fbb779cf62f65a7546364bfa376d804072902a378d17"
	I1027 22:34:33.137343  672842 cri.go:89] found id: "4d25924c4df6aa14a77cfaec45632997d587987e4bcaec77378f5187ae9edcd7"
	I1027 22:34:33.137349  672842 cri.go:89] found id: "7e91149f673e2c10688e02752d115bb3adcae29b62e2fd2dc44183254068dcaa"
	I1027 22:34:33.137354  672842 cri.go:89] found id: "6eb92e57262ba23c48023890df53c802fa6236f0703a6780cc2f07abf8afe516"
	I1027 22:34:33.137358  672842 cri.go:89] found id: "81f2d85ae3519636b9310c3e124b232192aee71611336d7820879bc258ccf577"
	I1027 22:34:33.137362  672842 cri.go:89] found id: "c80661f95d0b8aa17deb1a4d771ca76f6e8d600e9608fcaf7bcdd6b3d302948e"
	I1027 22:34:33.137366  672842 cri.go:89] found id: "4c2ddd4a18261619f09936f5cf6470363cfbdd6270130ee2f201206e52c924e3"
	I1027 22:34:33.137371  672842 cri.go:89] found id: ""
	I1027 22:34:33.137420  672842 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:34:33.149903  672842 retry.go:31] will retry after 356.045215ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:34:33Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:34:33.506193  672842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:34:33.520430  672842 pause.go:52] kubelet running: false
	I1027 22:34:33.520488  672842 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:34:33.632635  672842 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:34:33.632708  672842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:34:33.705684  672842 cri.go:89] found id: "a5b7b50d4040b9768126fbb779cf62f65a7546364bfa376d804072902a378d17"
	I1027 22:34:33.705713  672842 cri.go:89] found id: "4d25924c4df6aa14a77cfaec45632997d587987e4bcaec77378f5187ae9edcd7"
	I1027 22:34:33.705719  672842 cri.go:89] found id: "7e91149f673e2c10688e02752d115bb3adcae29b62e2fd2dc44183254068dcaa"
	I1027 22:34:33.705725  672842 cri.go:89] found id: "6eb92e57262ba23c48023890df53c802fa6236f0703a6780cc2f07abf8afe516"
	I1027 22:34:33.705729  672842 cri.go:89] found id: "81f2d85ae3519636b9310c3e124b232192aee71611336d7820879bc258ccf577"
	I1027 22:34:33.705734  672842 cri.go:89] found id: "c80661f95d0b8aa17deb1a4d771ca76f6e8d600e9608fcaf7bcdd6b3d302948e"
	I1027 22:34:33.705738  672842 cri.go:89] found id: "4c2ddd4a18261619f09936f5cf6470363cfbdd6270130ee2f201206e52c924e3"
	I1027 22:34:33.705742  672842 cri.go:89] found id: ""
	I1027 22:34:33.705798  672842 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:34:33.719395  672842 retry.go:31] will retry after 500.475086ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:34:33Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:34:34.220146  672842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:34:34.237251  672842 pause.go:52] kubelet running: false
	I1027 22:34:34.237318  672842 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:34:34.367568  672842 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:34:34.367661  672842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:34:34.448564  672842 cri.go:89] found id: "a5b7b50d4040b9768126fbb779cf62f65a7546364bfa376d804072902a378d17"
	I1027 22:34:34.448587  672842 cri.go:89] found id: "4d25924c4df6aa14a77cfaec45632997d587987e4bcaec77378f5187ae9edcd7"
	I1027 22:34:34.448591  672842 cri.go:89] found id: "7e91149f673e2c10688e02752d115bb3adcae29b62e2fd2dc44183254068dcaa"
	I1027 22:34:34.448594  672842 cri.go:89] found id: "6eb92e57262ba23c48023890df53c802fa6236f0703a6780cc2f07abf8afe516"
	I1027 22:34:34.448598  672842 cri.go:89] found id: "81f2d85ae3519636b9310c3e124b232192aee71611336d7820879bc258ccf577"
	I1027 22:34:34.448600  672842 cri.go:89] found id: "c80661f95d0b8aa17deb1a4d771ca76f6e8d600e9608fcaf7bcdd6b3d302948e"
	I1027 22:34:34.448603  672842 cri.go:89] found id: "4c2ddd4a18261619f09936f5cf6470363cfbdd6270130ee2f201206e52c924e3"
	I1027 22:34:34.448605  672842 cri.go:89] found id: ""
	I1027 22:34:34.448648  672842 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:34:34.588110  672842 out.go:203] 
	W1027 22:34:34.700768  672842 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:34:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:34:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:34:34.700795  672842 out.go:285] * 
	* 
	W1027 22:34:34.705534  672842 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:34:34.784528  672842 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-067652 --alsologtostderr -v=5" : exit status 80
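
The stderr above shows the pause path retrying `sudo runc list -f json` twice (after 356ms and 500ms backoffs) before giving up, each time on the same missing /run/runc directory seen in the JSONOutput pause/unpause failures. Which runtime handler crio actually reports can be checked on the node itself; crictl is present there (the logs above show it being invoked over ssh), and the grep filter is illustrative:

	out/minikube-linux-amd64 -p pause-067652 ssh -- sudo crictl info | grep -i runtime
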
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-067652
helpers_test.go:243: (dbg) docker inspect pause-067652:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca",
	        "Created": "2025-10-27T22:33:48.085449702Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 657723,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:33:48.143775362Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca/hosts",
	        "LogPath": "/var/lib/docker/containers/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca-json.log",
	        "Name": "/pause-067652",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-067652:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-067652",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca",
	                "LowerDir": "/var/lib/docker/overlay2/ab8c8f9a09c9d1a253b0dc61a821ab0c205705452a5f9f69707b341cdcbf8436-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab8c8f9a09c9d1a253b0dc61a821ab0c205705452a5f9f69707b341cdcbf8436/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab8c8f9a09c9d1a253b0dc61a821ab0c205705452a5f9f69707b341cdcbf8436/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab8c8f9a09c9d1a253b0dc61a821ab0c205705452a5f9f69707b341cdcbf8436/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-067652",
	                "Source": "/var/lib/docker/volumes/pause-067652/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-067652",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-067652",
	                "name.minikube.sigs.k8s.io": "pause-067652",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57f64ef647568f4c5e017af294834eeefd956b5419a72167502f408cadaa580f",
	            "SandboxKey": "/var/run/docker/netns/57f64ef64756",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-067652": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:26:63:c5:aa:53",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c9de812c0cccf26ebda388d884059189901c88208c87e7ef90581d800110902",
	                    "EndpointID": "c035eda68a293e41521334679799966f4c417fee48601687fb1b41280e0e7526",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-067652",
	                        "8d2b598050e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-067652 -n pause-067652
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-067652 -n pause-067652: exit status 2 (404.75433ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-067652 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-067652 logs -n 25: (1.772410755s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-589123 --schedule 5m                                                                                                   │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 5m                                                                                                   │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 5m                                                                                                   │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --cancel-scheduled                                                                                              │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │ 27 Oct 25 22:32 UTC │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │ 27 Oct 25 22:32 UTC │
	│ delete  │ -p scheduled-stop-589123                                                                                                                 │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:33 UTC │
	│ start   │ -p insufficient-storage-960733 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-960733 │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │                     │
	│ delete  │ -p insufficient-storage-960733                                                                                                           │ insufficient-storage-960733 │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:33 UTC │
	│ start   │ -p force-systemd-env-078908 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-078908    │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p pause-067652 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-067652                │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p offline-crio-037558 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-037558         │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p stopped-upgrade-126023 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-126023      │ jenkins │ v1.32.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:34 UTC │
	│ delete  │ -p force-systemd-env-078908                                                                                                              │ force-systemd-env-078908    │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p missing-upgrade-912550 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-912550      │ jenkins │ v1.32.0 │ 27 Oct 25 22:34 UTC │                     │
	│ stop    │ stopped-upgrade-126023 stop                                                                                                              │ stopped-upgrade-126023      │ jenkins │ v1.32.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p stopped-upgrade-126023 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-126023      │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │                     │
	│ start   │ -p pause-067652 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-067652                │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:34 UTC │
	│ delete  │ -p offline-crio-037558                                                                                                                   │ offline-crio-037558         │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-695499   │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │                     │
	│ pause   │ -p pause-067652 --alsologtostderr -v=5                                                                                                   │ pause-067652                │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:34:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:34:29.248123  671436 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:34:29.248422  671436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:34:29.248432  671436 out.go:374] Setting ErrFile to fd 2...
	I1027 22:34:29.248436  671436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:34:29.248674  671436 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:34:29.249186  671436 out.go:368] Setting JSON to false
	I1027 22:34:29.250492  671436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8208,"bootTime":1761596261,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:34:29.250585  671436 start.go:143] virtualization: kvm guest
	I1027 22:34:29.252781  671436 out.go:179] * [kubernetes-upgrade-695499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:34:29.253787  671436 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:34:29.253810  671436 notify.go:221] Checking for updates...
	I1027 22:34:29.256409  671436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:34:29.257503  671436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:34:29.258382  671436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:34:26.149286  669685 out.go:252] * Updating the running docker "pause-067652" container ...
	I1027 22:34:26.149325  669685 machine.go:94] provisionDockerMachine start ...
	I1027 22:34:26.149414  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:26.168233  669685 main.go:143] libmachine: Using SSH client type: native
	I1027 22:34:26.168536  669685 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1027 22:34:26.168553  669685 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:34:26.312409  669685 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-067652
	
	I1027 22:34:26.312486  669685 ubuntu.go:182] provisioning hostname "pause-067652"
	I1027 22:34:26.312575  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:26.333469  669685 main.go:143] libmachine: Using SSH client type: native
	I1027 22:34:26.333712  669685 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1027 22:34:26.333733  669685 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-067652 && echo "pause-067652" | sudo tee /etc/hostname
	I1027 22:34:26.494906  669685 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-067652
	
	I1027 22:34:26.495003  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:26.515741  669685 main.go:143] libmachine: Using SSH client type: native
	I1027 22:34:26.516049  669685 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1027 22:34:26.516070  669685 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-067652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-067652/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-067652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:34:26.663679  669685 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:34:26.663752  669685 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:34:26.663800  669685 ubuntu.go:190] setting up certificates
	I1027 22:34:26.663812  669685 provision.go:84] configureAuth start
	I1027 22:34:26.663891  669685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-067652
	I1027 22:34:26.685214  669685 provision.go:143] copyHostCerts
	I1027 22:34:26.685279  669685 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:34:26.685303  669685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:34:26.685373  669685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:34:26.685511  669685 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:34:26.685524  669685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:34:26.685556  669685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:34:26.685657  669685 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:34:26.685673  669685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:34:26.685707  669685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:34:26.685803  669685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.pause-067652 san=[127.0.0.1 192.168.85.2 localhost minikube pause-067652]
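An aside on the step above: provision.go is minting a TLS server certificate signed by the local minikube CA, with the listed IPs and DNS names embedded as subject alternative names. Roughly the same result, sketched with plain openssl (the filenames and the validity period are illustrative assumptions, not minikube's actual code path):

	# sketch: issue a SAN-bearing server cert from the local CA
	openssl req -new -key server-key.pem -subj "/O=jenkins.pause-067652" |
	  openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:pause-067652') \
	    -out server.pem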
	I1027 22:34:27.020693  669685 provision.go:177] copyRemoteCerts
	I1027 22:34:27.020758  669685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:34:27.020800  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.043576  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.154290  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:34:27.173847  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1027 22:34:27.192196  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:34:27.210176  669685 provision.go:87] duration metric: took 546.350068ms to configureAuth
	I1027 22:34:27.210205  669685 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:34:27.210447  669685 config.go:182] Loaded profile config "pause-067652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:34:27.210563  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.230050  669685 main.go:143] libmachine: Using SSH client type: native
	I1027 22:34:27.230310  669685 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1027 22:34:27.230327  669685 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:34:27.535760  669685 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:34:27.535788  669685 machine.go:97] duration metric: took 1.38645634s to provisionDockerMachine
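For context on the CRIO_MINIKUBE_OPTIONS write just above: dropping that variable into /etc/sysconfig/crio.minikube only changes anything because the node's crio service sources the file and splices the variable into its command line. The shape of such a hookup, as a systemd drop-in; the path and ExecStart below are illustrative assumptions about the kicbase image, not captured from it:

	# e.g. /etc/systemd/system/crio.service.d/10-minikube.conf (hypothetical)
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS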
	I1027 22:34:27.535801  669685 start.go:293] postStartSetup for "pause-067652" (driver="docker")
	I1027 22:34:27.535810  669685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:34:27.535859  669685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:34:27.535909  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.553747  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.654843  669685 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:34:27.658534  669685 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:34:27.658565  669685 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:34:27.658576  669685 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:34:27.658636  669685 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:34:27.658707  669685 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:34:27.658801  669685 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:34:27.666691  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:34:27.685143  669685 start.go:296] duration metric: took 149.329619ms for postStartSetup
	I1027 22:34:27.685198  669685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:34:27.685259  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.702134  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.799143  669685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:34:27.804165  669685 fix.go:57] duration metric: took 1.679221538s for fixHost
	I1027 22:34:27.804192  669685 start.go:83] releasing machines lock for "pause-067652", held for 1.679265739s
	I1027 22:34:27.804257  669685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-067652
	I1027 22:34:27.822844  669685 ssh_runner.go:195] Run: cat /version.json
	I1027 22:34:27.822894  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.822972  669685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:34:27.823032  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.840935  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.845749  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.943792  669685 ssh_runner.go:195] Run: systemctl --version
	I1027 22:34:28.000936  669685 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:34:28.043444  669685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:34:28.048317  669685 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:34:28.048395  669685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:34:28.056411  669685 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:34:28.056435  669685 start.go:496] detecting cgroup driver to use...
	I1027 22:34:28.056468  669685 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:34:28.056516  669685 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:34:28.072493  669685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:34:28.086381  669685 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:34:28.086437  669685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:34:28.105217  669685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:34:28.122667  669685 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:34:28.260346  669685 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:34:28.385852  669685 docker.go:234] disabling docker service ...
	I1027 22:34:28.385926  669685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:34:28.402621  669685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:34:28.418579  669685 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:34:28.551997  669685 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:34:28.690437  669685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:34:28.704755  669685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:34:28.720750  669685 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:34:28.720799  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.731166  669685 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:34:28.731227  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.742012  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.751937  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.762245  669685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:34:28.771330  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.781459  669685 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.792488  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.802323  669685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:34:28.810323  669685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:34:28.819239  669685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:28.981786  669685 ssh_runner.go:195] Run: sudo systemctl restart crio
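Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly these keys (reconstructed from the commands themselves, not read back from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]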
	I1027 22:34:29.156537  669685 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:34:29.156603  669685 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:34:29.161844  669685 start.go:564] Will wait 60s for crictl version
	I1027 22:34:29.161907  669685 ssh_runner.go:195] Run: which crictl
	I1027 22:34:29.166196  669685 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:34:29.191864  669685 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:34:29.191931  669685 ssh_runner.go:195] Run: crio --version
	I1027 22:34:29.224956  669685 ssh_runner.go:195] Run: crio --version
	I1027 22:34:29.260519  671436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:34:29.260527  669685 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:34:29.261478  671436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:34:29.263095  671436 config.go:182] Loaded profile config "missing-upgrade-912550": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1027 22:34:29.263298  671436 config.go:182] Loaded profile config "pause-067652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:34:29.263429  671436 config.go:182] Loaded profile config "stopped-upgrade-126023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1027 22:34:29.263530  671436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:34:29.290740  671436 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:34:29.290834  671436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:34:29.363629  671436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 22:34:29.349340355 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:34:29.363762  671436 docker.go:318] overlay module found
	I1027 22:34:29.365435  671436 out.go:179] * Using the docker driver based on user configuration
	I1027 22:34:29.366321  671436 start.go:307] selected driver: docker
	I1027 22:34:29.366339  671436 start.go:928] validating driver "docker" against <nil>
	I1027 22:34:29.366363  671436 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:34:29.367113  671436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:34:29.436874  671436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 22:34:29.425360607 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:34:29.437131  671436 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:34:29.437432  671436 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:34:29.439002  671436 out.go:179] * Using Docker driver with root privileges
	I1027 22:34:29.439883  671436 cni.go:84] Creating CNI manager for ""
	I1027 22:34:29.440001  671436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:34:29.440018  671436 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:34:29.440100  671436 start.go:351] cluster config:
	{Name:kubernetes-upgrade-695499 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-695499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:34:29.441156  671436 out.go:179] * Starting "kubernetes-upgrade-695499" primary control-plane node in "kubernetes-upgrade-695499" cluster
	I1027 22:34:29.442327  671436 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:34:29.443760  671436 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:34:29.444618  671436 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 22:34:29.444669  671436 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1027 22:34:29.444687  671436 cache.go:59] Caching tarball of preloaded images
	I1027 22:34:29.444720  671436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:34:29.444790  671436 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:34:29.444806  671436 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1027 22:34:29.444919  671436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/config.json ...
	I1027 22:34:29.444966  671436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/config.json: {Name:mk3497c50cd3b88d50ee8a4b6f0b69e927b8bc30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.468544  671436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:34:29.468561  671436 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:34:29.468578  671436 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:34:29.468619  671436 start.go:360] acquireMachinesLock for kubernetes-upgrade-695499: {Name:mkec40f5d86362c3c0e1baba0d014c7a6178b3d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:34:29.468725  671436 start.go:364] duration metric: took 85.374µs to acquireMachinesLock for "kubernetes-upgrade-695499"
	I1027 22:34:29.468760  671436 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-695499 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-695499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:34:29.468847  671436 start.go:125] createHost starting for "" (driver="docker")
	I1027 22:34:26.623080  666396 out.go:204]   - Generating certificates and keys ...
	I1027 22:34:26.623234  666396 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1027 22:34:26.623343  666396 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1027 22:34:26.720250  666396 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:34:26.968295  666396 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:34:27.299257  666396 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:34:27.682908  666396 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1027 22:34:27.884133  666396 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1027 22:34:27.884337  666396 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-912550] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:34:28.100811  666396 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1027 22:34:28.101066  666396 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-912550] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:34:28.207815  666396 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:34:28.428714  666396 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:34:28.521195  666396 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1027 22:34:28.521473  666396 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:34:28.794819  666396 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:34:29.186635  666396 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:34:29.440040  666396 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:34:29.563995  666396 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:34:29.564491  666396 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:34:29.570689  666396 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:34:29.572095  666396 out.go:204]   - Booting up control plane ...
	I1027 22:34:29.572222  666396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:34:29.572329  666396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:34:29.572803  666396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:34:29.584093  666396 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:34:29.585164  666396 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:34:29.585220  666396 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1027 22:34:29.680640  666396 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
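While kubeadm sits in that wait (the "up to 4m0s" message above), control-plane progress is visible from the node through the CRI; illustrative commands, not part of this run:

	sudo crictl pods --name kube-apiserver
	sudo crictl logs $(sudo crictl ps --name kube-apiserver -q | head -n1)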
	I1027 22:34:29.261459  669685 cli_runner.go:164] Run: docker network inspect pause-067652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:34:29.283650  669685 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 22:34:29.289045  669685 kubeadm.go:884] updating cluster {Name:pause-067652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-067652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:34:29.289206  669685 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:34:29.289245  669685 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:34:29.331503  669685 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:34:29.331571  669685 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:34:29.331659  669685 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:34:29.369543  669685 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:34:29.369563  669685 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:34:29.369570  669685 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 22:34:29.369686  669685 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-067652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-067652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
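One detail worth flagging in the unit fragment above: the bare ExecStart= line is the systemd idiom for clearing an inherited ExecStart before redefining it, so the drop-in written below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf replaces the stock kubelet command line instead of appending a second one. To see which drop-ins systemd actually merged (illustrative, not part of this run):

	systemctl cat kubelet | grep '^# /'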
	I1027 22:34:29.369760  669685 ssh_runner.go:195] Run: crio config
	I1027 22:34:29.433998  669685 cni.go:84] Creating CNI manager for ""
	I1027 22:34:29.434024  669685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:34:29.434055  669685 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:34:29.434085  669685 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-067652 NodeName:pause-067652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:34:29.434255  669685 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-067652"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
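That closes the rendered kubeadm config. A quick way to sanity-check a generated file like this one, assuming the bundled kubeadm ships the validate subcommand (this run does not invoke it):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new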
	
	I1027 22:34:29.434318  669685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:34:29.443500  669685 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:34:29.443568  669685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:34:29.451847  669685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1027 22:34:29.467051  669685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:34:29.480474  669685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1027 22:34:29.494655  669685 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:34:29.498721  669685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:29.643513  669685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:34:29.658999  669685 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652 for IP: 192.168.85.2
	I1027 22:34:29.659020  669685 certs.go:195] generating shared ca certs ...
	I1027 22:34:29.659041  669685 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.659219  669685 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:34:29.659290  669685 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:34:29.659304  669685 certs.go:257] generating profile certs ...
	I1027 22:34:29.659424  669685 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.key
	I1027 22:34:29.659494  669685 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/apiserver.key.613d8030
	I1027 22:34:29.659554  669685 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/proxy-client.key
	I1027 22:34:29.659712  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:34:29.659755  669685 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:34:29.659771  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:34:29.659803  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:34:29.659846  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:34:29.659877  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:34:29.659964  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:34:29.660913  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:34:29.684540  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:34:29.709125  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:34:29.728100  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:34:29.749352  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:34:29.770133  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:34:29.790451  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:34:29.813088  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:34:29.835076  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:34:29.856633  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:34:29.875789  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:34:29.895996  669685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:34:29.913742  669685 ssh_runner.go:195] Run: openssl version
	I1027 22:34:29.923252  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:34:29.937818  669685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.942617  669685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.942677  669685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.994836  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:34:30.025584  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:34:30.038341  669685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:34:30.044371  669685 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:34:30.044440  669685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:34:30.100081  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:34:30.110343  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:34:30.120144  669685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:34:30.125109  669685 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:34:30.125181  669685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:34:30.170088  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
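The openssl/ln pairs above are building the subject-hash symlink layout OpenSSL uses to look up CAs under /etc/ssl/certs (hence the b5213941.0-style names). Condensed into one idempotent step for a single file, as a sketch of the same dance the test performs over SSH:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"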
	I1027 22:34:30.180253  669685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:34:30.184885  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:34:30.232283  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:34:30.274015  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:34:30.310067  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:34:30.346541  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:34:30.381518  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
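The -checkend 86400 runs above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, 1 means it will have lapsed, which is how the restart path decides whether certificates need regenerating. By hand:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "good for another day" || echo "expires within 24h"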
	I1027 22:34:30.417937  669685 kubeadm.go:401] StartCluster: {Name:pause-067652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-067652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:34:30.418084  669685 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:34:30.418166  669685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:34:30.450903  669685 cri.go:89] found id: "a5b7b50d4040b9768126fbb779cf62f65a7546364bfa376d804072902a378d17"
	I1027 22:34:30.450934  669685 cri.go:89] found id: "4d25924c4df6aa14a77cfaec45632997d587987e4bcaec77378f5187ae9edcd7"
	I1027 22:34:30.450940  669685 cri.go:89] found id: "7e91149f673e2c10688e02752d115bb3adcae29b62e2fd2dc44183254068dcaa"
	I1027 22:34:30.450964  669685 cri.go:89] found id: "6eb92e57262ba23c48023890df53c802fa6236f0703a6780cc2f07abf8afe516"
	I1027 22:34:30.450969  669685 cri.go:89] found id: "81f2d85ae3519636b9310c3e124b232192aee71611336d7820879bc258ccf577"
	I1027 22:34:30.450973  669685 cri.go:89] found id: "c80661f95d0b8aa17deb1a4d771ca76f6e8d600e9608fcaf7bcdd6b3d302948e"
	I1027 22:34:30.450978  669685 cri.go:89] found id: "4c2ddd4a18261619f09936f5cf6470363cfbdd6270130ee2f201206e52c924e3"
	I1027 22:34:30.450981  669685 cri.go:89] found id: ""
	I1027 22:34:30.451029  669685 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:34:30.463777  669685 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:34:30Z" level=error msg="open /run/runc: no such file or directory"
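The runc failure above is benign in this flow: standalone runc keeps container state under /run/runc by default, and nothing on this node has created that directory (CRI-O presumably drives its OCI runtime with a different state root), so the paused-container probe comes back empty and startup continues on the CRI listing already collected at cri.go:54. That listing, reproduced via crictl for illustration:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system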
	I1027 22:34:30.463869  669685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:34:30.473401  669685 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:34:30.473424  669685 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:34:30.473494  669685 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:34:30.482433  669685 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:30.483170  669685 kubeconfig.go:125] found "pause-067652" server: "https://192.168.85.2:8443"
	I1027 22:34:30.484373  669685 kapi.go:59] client config for pause-067652: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.key", CAFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:34:30.484970  669685 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 22:34:30.484997  669685 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 22:34:30.485004  669685 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 22:34:30.485010  669685 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 22:34:30.485016  669685 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 22:34:30.485436  669685 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:34:30.494886  669685 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 22:34:30.494923  669685 kubeadm.go:602] duration metric: took 21.49202ms to restartPrimaryControlPlane
	I1027 22:34:30.494934  669685 kubeadm.go:403] duration metric: took 77.010887ms to StartCluster
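Both restart paths in this log hinge on the same test: diff the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new, and reconfigure only on drift (the stopped-upgrade-126023 run further down hits the drift branch). A sketch of that decision, assuming local file paths rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfigure reports whether the rendered kubeadm config differs
// from the one already on disk. `diff -u` exits 0 when files match and
// 1 when they differ, which is what the log above keys on.
func needsReconfigure(current, rendered string) (bool, []byte, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, rendered).CombinedOutput()
	if err == nil {
		return false, nil, nil // identical: no reconfiguration required
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, out, nil // drift detected: reconfigure from the .new file
	}
	return false, out, err // exit code >1: diff itself failed
}

func main() {
	drift, patch, err := needsReconfigure(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	fmt.Println(drift, string(patch), err)
}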
	I1027 22:34:30.494966  669685 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:30.495041  669685 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:34:30.496195  669685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:30.496853  669685 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:34:30.496982  669685 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:34:30.497124  669685 config.go:182] Loaded profile config "pause-067652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:34:30.498759  669685 out.go:179] * Verifying Kubernetes components...
	I1027 22:34:30.498756  669685 out.go:179] * Enabled addons: 
	I1027 22:34:30.499767  669685 addons.go:514] duration metric: took 2.797292ms for enable addons: enabled=[]
	I1027 22:34:30.499808  669685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:30.666203  669685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:34:30.688703  669685 node_ready.go:35] waiting up to 6m0s for node "pause-067652" to be "Ready" ...
	I1027 22:34:30.704082  669685 node_ready.go:49] node "pause-067652" is "Ready"
	I1027 22:34:30.704121  669685 node_ready.go:38] duration metric: took 15.376352ms for node "pause-067652" to be "Ready" ...
	I1027 22:34:30.704141  669685 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:34:30.704208  669685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:34:30.722123  669685 api_server.go:72] duration metric: took 225.222266ms to wait for apiserver process to appear ...
	I1027 22:34:30.722169  669685 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:34:30.722195  669685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 22:34:30.729872  669685 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 22:34:30.731345  669685 api_server.go:141] control plane version: v1.34.1
	I1027 22:34:30.731379  669685 api_server.go:131] duration metric: took 9.198013ms to wait for apiserver health ...
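The healthz probe above is a plain HTTPS GET that must come back 200 with body "ok". A self-contained sketch of the polling loop; InsecureSkipVerify is a stand-in assumption here, whereas the real client authenticates with the profile's client.crt/client.key and verifies ca.crt as shown in the rest.Config dump:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", time.Minute))
}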
	I1027 22:34:30.731390  669685 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:34:30.736407  669685 system_pods.go:59] 7 kube-system pods found
	I1027 22:34:30.736462  669685 system_pods.go:61] "coredns-66bc5c9577-b87mn" [813bc0ca-bc78-4362-9408-c9d3da00c90a] Running
	I1027 22:34:30.736474  669685 system_pods.go:61] "etcd-pause-067652" [a81dd29d-0459-4cfb-9526-da184102a77a] Running
	I1027 22:34:30.736481  669685 system_pods.go:61] "kindnet-m9bfp" [e3297a67-3f34-4a07-b21e-7bf6c8417586] Running
	I1027 22:34:30.736487  669685 system_pods.go:61] "kube-apiserver-pause-067652" [92dbf739-c2eb-4ed8-8cff-c62d557fb0c4] Running
	I1027 22:34:30.736494  669685 system_pods.go:61] "kube-controller-manager-pause-067652" [b22f0aa8-ce68-4380-945b-e2e926b86f1f] Running
	I1027 22:34:30.736500  669685 system_pods.go:61] "kube-proxy-zhh4l" [2f530998-6842-4db5-bbe1-359bdee56be3] Running
	I1027 22:34:30.736506  669685 system_pods.go:61] "kube-scheduler-pause-067652" [fc2e9923-17bc-4c0f-aaa8-de68234a9d2d] Running
	I1027 22:34:30.736515  669685 system_pods.go:74] duration metric: took 5.116524ms to wait for pod list to return data ...
	I1027 22:34:30.736529  669685 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:34:30.739287  669685 default_sa.go:45] found service account: "default"
	I1027 22:34:30.739345  669685 default_sa.go:55] duration metric: took 2.807504ms for default service account to be created ...
	I1027 22:34:30.739357  669685 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:34:30.744392  669685 system_pods.go:86] 7 kube-system pods found
	I1027 22:34:30.744424  669685 system_pods.go:89] "coredns-66bc5c9577-b87mn" [813bc0ca-bc78-4362-9408-c9d3da00c90a] Running
	I1027 22:34:30.744431  669685 system_pods.go:89] "etcd-pause-067652" [a81dd29d-0459-4cfb-9526-da184102a77a] Running
	I1027 22:34:30.744436  669685 system_pods.go:89] "kindnet-m9bfp" [e3297a67-3f34-4a07-b21e-7bf6c8417586] Running
	I1027 22:34:30.744441  669685 system_pods.go:89] "kube-apiserver-pause-067652" [92dbf739-c2eb-4ed8-8cff-c62d557fb0c4] Running
	I1027 22:34:30.744446  669685 system_pods.go:89] "kube-controller-manager-pause-067652" [b22f0aa8-ce68-4380-945b-e2e926b86f1f] Running
	I1027 22:34:30.744451  669685 system_pods.go:89] "kube-proxy-zhh4l" [2f530998-6842-4db5-bbe1-359bdee56be3] Running
	I1027 22:34:30.744456  669685 system_pods.go:89] "kube-scheduler-pause-067652" [fc2e9923-17bc-4c0f-aaa8-de68234a9d2d] Running
	I1027 22:34:30.744467  669685 system_pods.go:126] duration metric: took 5.101203ms to wait for k8s-apps to be running ...
	I1027 22:34:30.744476  669685 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:34:30.744529  669685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:34:30.764410  669685 system_svc.go:56] duration metric: took 19.904063ms WaitForService to wait for kubelet
	I1027 22:34:30.764441  669685 kubeadm.go:587] duration metric: took 267.54844ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:34:30.764463  669685 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:34:30.768730  669685 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:34:30.768836  669685 node_conditions.go:123] node cpu capacity is 8
	I1027 22:34:30.768871  669685 node_conditions.go:105] duration metric: took 4.402013ms to run NodePressure ...
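The NodePressure step reads capacity straight off the Node object. A sketch of that read using the k8s.io/api types; the hard-coded Node literal stands in for one fetched via client-go, and checkNodePressure is an illustrative name, not minikube's:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// checkNodePressure mirrors the capacity probe above: it reads the
// ephemeral-storage and cpu capacity off a Node and flags a node that
// reports no storage capacity at all.
func checkNodePressure(n *corev1.Node) error {
	storage, ok := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	if !ok {
		return fmt.Errorf("node %s reports no ephemeral storage capacity", n.Name)
	}
	cpu := n.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	return nil
}

func main() {
	n := &corev1.Node{}
	n.Name = "pause-067652"
	n.Status.Capacity = corev1.ResourceList{
		corev1.ResourceEphemeralStorage: resource.MustParse("304681132Ki"),
		corev1.ResourceCPU:              resource.MustParse("8"),
	}
	fmt.Println(checkNodePressure(n))
}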
	I1027 22:34:30.768888  669685 start.go:242] waiting for startup goroutines ...
	I1027 22:34:30.768898  669685 start.go:247] waiting for cluster config update ...
	I1027 22:34:30.768908  669685 start.go:256] writing updated cluster config ...
	I1027 22:34:30.769470  669685 ssh_runner.go:195] Run: rm -f paused
	I1027 22:34:30.775007  669685 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:34:30.776084  669685 kapi.go:59] client config for pause-067652: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.key", CAFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:34:30.780239  669685 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b87mn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.786443  669685 pod_ready.go:94] pod "coredns-66bc5c9577-b87mn" is "Ready"
	I1027 22:34:30.786471  669685 pod_ready.go:86] duration metric: took 6.203849ms for pod "coredns-66bc5c9577-b87mn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.788720  669685 pod_ready.go:83] waiting for pod "etcd-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.794297  669685 pod_ready.go:94] pod "etcd-pause-067652" is "Ready"
	I1027 22:34:30.794326  669685 pod_ready.go:86] duration metric: took 5.579931ms for pod "etcd-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.796976  669685 pod_ready.go:83] waiting for pod "kube-apiserver-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.801693  669685 pod_ready.go:94] pod "kube-apiserver-pause-067652" is "Ready"
	I1027 22:34:30.801717  669685 pod_ready.go:86] duration metric: took 4.661466ms for pod "kube-apiserver-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.804169  669685 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:28.671612  667713 cli_runner.go:164] Run: docker network inspect stopped-upgrade-126023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:34:28.689516  667713 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1027 22:34:28.694548  667713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
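The /etc/hosts one-liner above is an idempotent upsert: strip any previous host.minikube.internal line, append the fresh mapping, and replace the file through a temporary copy so readers never see a half-written hosts file. The same idea in Go, assuming local file access instead of the ssh_runner:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one
// line maps name to ip, mirroring the grep -v / append / cp one-liner
// in the log above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line) // keep unrelated entries
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic replace on the same filesystem
}

func main() {
	fmt.Println(upsertHost("/tmp/hosts", "192.168.103.1", "host.minikube.internal"))
}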
	I1027 22:34:28.706965  667713 kubeadm.go:884] updating cluster {Name:stopped-upgrade-126023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-126023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:34:28.707106  667713 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1027 22:34:28.707176  667713 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:34:28.754884  667713 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:34:28.754909  667713 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:34:28.754987  667713 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:34:28.793608  667713 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:34:28.793633  667713 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:34:28.793642  667713 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.3 crio true true} ...
	I1027 22:34:28.793770  667713 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-126023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-126023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:34:28.793848  667713 ssh_runner.go:195] Run: crio config
	I1027 22:34:28.844764  667713 cni.go:84] Creating CNI manager for ""
	I1027 22:34:28.844790  667713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:34:28.844818  667713 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:34:28.844847  667713 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-126023 NodeName:stopped-upgrade-126023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:34:28.845044  667713 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-126023"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
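The rendered kubeadm.yaml above is really four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A sketch that splits the file on document separators and reports each document's kind; gopkg.in/yaml.v3 is an assumed dependency here, not necessarily the library minikube itself uses:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// listKinds splits a multi-document kubeadm config on "---" separators
// and reports the apiVersion/kind of each document, which is how the
// single kubeadm.yaml above carries Init, Cluster, Kubelet and
// KubeProxy configuration together.
func listKinds(path string) ([]string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var kinds []string
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			return nil, err
		}
		kinds = append(kinds, meta.APIVersion+"/"+meta.Kind)
	}
	return kinds, nil
}

func main() {
	kinds, err := listKinds("/var/tmp/minikube/kubeadm.yaml")
	fmt.Println(kinds, err)
}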
	I1027 22:34:28.845129  667713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1027 22:34:28.856340  667713 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:34:28.856421  667713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:34:28.872625  667713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1027 22:34:28.891933  667713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:34:28.910069  667713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1027 22:34:28.929276  667713 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:34:28.932824  667713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:34:28.945289  667713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:29.027854  667713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:34:29.042834  667713 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023 for IP: 192.168.103.2
	I1027 22:34:29.042857  667713 certs.go:195] generating shared ca certs ...
	I1027 22:34:29.042877  667713 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.043055  667713 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:34:29.043119  667713 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:34:29.043134  667713 certs.go:257] generating profile certs ...
	I1027 22:34:29.043238  667713 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/client.key
	I1027 22:34:29.043269  667713 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key.b70c58af
	I1027 22:34:29.043285  667713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt.b70c58af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1027 22:34:29.233790  667713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt.b70c58af ...
	I1027 22:34:29.233815  667713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt.b70c58af: {Name:mk51f0c0519e3f54a9207eb40d44ba11d1b909c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.234018  667713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key.b70c58af ...
	I1027 22:34:29.234040  667713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key.b70c58af: {Name:mk54efeb4f8aaab6a5119df80cf96cd4d4dfcdc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.234158  667713 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt.b70c58af -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt
	I1027 22:34:29.234341  667713 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key.b70c58af -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key
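The apiserver serving certificate generated above carries exactly four IP SANs: the kubernetes Service ClusterIP (10.96.0.1), loopback, 10.0.0.1 and the node IP. A sketch of minting such a SAN-bearing cert with crypto/x509; the throwaway CA built in main is an assumption standing in for the profile's persisted ca.crt/ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newAPIServerCert signs a serving certificate whose IP SANs match the
// set in the log above.
func newAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway self-signed CA for the sketch only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	pemBytes, err := newAPIServerCert(caCert, caKey)
	fmt.Println(len(pemBytes), err)
}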
	I1027 22:34:29.234532  667713 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/proxy-client.key
	I1027 22:34:29.234680  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:34:29.234719  667713 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:34:29.234729  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:34:29.234766  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:34:29.234800  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:34:29.234828  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:34:29.234881  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:34:29.235633  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:34:29.265300  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:34:29.296990  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:34:29.329858  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:34:29.360041  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 22:34:29.392916  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:34:29.426593  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:34:29.456299  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:34:29.484406  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:34:29.512814  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:34:29.548042  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:34:29.581928  667713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:34:29.602978  667713 ssh_runner.go:195] Run: openssl version
	I1027 22:34:29.610021  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:34:29.624015  667713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:34:29.628003  667713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:34:29.628059  667713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:34:29.635960  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:34:29.646802  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:34:29.657837  667713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:34:29.662213  667713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:34:29.662280  667713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:34:29.671358  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:34:29.682441  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:34:29.696074  667713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.700428  667713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.700489  667713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.710977  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:34:29.722730  667713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:34:29.726389  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:34:29.735037  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:34:29.742918  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:34:29.750233  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:34:29.757527  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:34:29.765802  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
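Each `openssl x509 -checkend 86400` run above asks one question: does this certificate survive the next 24 hours? The equivalent check in Go, with checkEnd as an illustrative helper name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd is the openssl `x509 -checkend 86400` check from the log:
// it fails if the certificate at path expires within the given window,
// which is how stale control-plane certs are caught before reuse.
func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM certificate found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Until(cert.NotAfter) < window {
		return fmt.Errorf("%s expires at %s, within %s", path, cert.NotAfter, window)
	}
	return nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Println(p, checkEnd(p, 24*time.Hour))
	}
}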
	I1027 22:34:29.773092  667713 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-126023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-126023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:34:29.773177  667713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:34:29.773228  667713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:34:29.814731  667713 cri.go:89] found id: ""
	I1027 22:34:29.814796  667713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1027 22:34:29.826811  667713 kubeadm.go:414] apiserver tunnel failed: apiserver port not set
	I1027 22:34:29.826830  667713 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:34:29.826834  667713 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:34:29.826893  667713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:34:29.837872  667713 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:29.838748  667713 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-126023" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:34:29.839174  667713 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-126023" cluster setting kubeconfig missing "stopped-upgrade-126023" context setting]
	I1027 22:34:29.839766  667713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.840709  667713 kapi.go:59] client config for stopped-upgrade-126023: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/client.key", CAFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:34:29.841365  667713 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 22:34:29.841391  667713 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 22:34:29.841405  667713 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 22:34:29.841417  667713 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 22:34:29.841424  667713 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 22:34:29.841918  667713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:34:29.853128  667713 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-27 22:34:06.070695325 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-27 22:34:28.926158516 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: systemd
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
	I1027 22:34:29.853146  667713 kubeadm.go:1161] stopping kube-system containers ...
	I1027 22:34:29.853160  667713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1027 22:34:29.853208  667713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:34:29.895272  667713 cri.go:89] found id: ""
	I1027 22:34:29.895347  667713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1027 22:34:29.928571  667713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:34:29.939148  667713 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Oct 27 22:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 27 22:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 27 22:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 27 22:34 /etc/kubernetes/scheduler.conf
	
	I1027 22:34:29.939224  667713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1027 22:34:29.951145  667713 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:29.951223  667713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:34:29.963389  667713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1027 22:34:29.975545  667713 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:29.975607  667713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:34:29.986293  667713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1027 22:34:29.997517  667713 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:29.997588  667713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:34:30.026825  667713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1027 22:34:30.040514  667713 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:30.040576  667713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
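The grep-then-rm loop above scrubs kubeconfigs that do not reference the expected control-plane endpoint (the string ends in ":0" because the restored profile carries APIServerPort:0), so that the following `kubeadm init phase kubeconfig` run regenerates them. A sketch of that apparent logic, assuming direct file access in place of the ssh_runner:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// scrubStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the loop the log runs over
// admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf.
func scrubStaleConfigs(endpoint string, paths ...string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			if os.IsNotExist(err) {
				continue // nothing to scrub
			}
			return err
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			if err := os.Remove(p); err != nil {
				return err
			}
			fmt.Println("removed stale", p)
		}
	}
	return nil
}

func main() {
	err := scrubStaleConfigs("https://control-plane.minikube.internal:0",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	)
	fmt.Println(err)
}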
	I1027 22:34:30.057800  667713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:34:30.070718  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:30.130595  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:30.933854  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:31.112785  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:31.192360  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:31.269819  667713 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:34:31.269908  667713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:34:31.770101  667713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:34:31.180881  669685 pod_ready.go:94] pod "kube-controller-manager-pause-067652" is "Ready"
	I1027 22:34:31.180923  669685 pod_ready.go:86] duration metric: took 376.728008ms for pod "kube-controller-manager-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:31.380532  669685 pod_ready.go:83] waiting for pod "kube-proxy-zhh4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:31.780439  669685 pod_ready.go:94] pod "kube-proxy-zhh4l" is "Ready"
	I1027 22:34:31.780476  669685 pod_ready.go:86] duration metric: took 399.914858ms for pod "kube-proxy-zhh4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:31.980972  669685 pod_ready.go:83] waiting for pod "kube-scheduler-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:32.379936  669685 pod_ready.go:94] pod "kube-scheduler-pause-067652" is "Ready"
	I1027 22:34:32.379985  669685 pod_ready.go:86] duration metric: took 398.980763ms for pod "kube-scheduler-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:32.380002  669685 pod_ready.go:40] duration metric: took 1.604936403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:34:32.456477  669685 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:34:32.460142  669685 out.go:179] * Done! kubectl is now configured to use "pause-067652" cluster and "default" namespace by default
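The "Ready or be gone" waits that close out this run reduce to one condition read per pod. A sketch with client-go, reusing the kubeconfig path from this job; podReady is an illustrative helper, and a NotFound error from Get would be the "gone" case:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's PodReady condition is True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // a NotFound here means the pod is "gone"
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21790-482142/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(podReady(cs, "kube-system", "kube-scheduler-pause-067652"))
}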
	I1027 22:34:29.472990  671436 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:34:29.473256  671436 start.go:159] libmachine.API.Create for "kubernetes-upgrade-695499" (driver="docker")
	I1027 22:34:29.473289  671436 client.go:173] LocalClient.Create starting
	I1027 22:34:29.473392  671436 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:34:29.473431  671436 main.go:143] libmachine: Decoding PEM data...
	I1027 22:34:29.473465  671436 main.go:143] libmachine: Parsing certificate...
	I1027 22:34:29.473547  671436 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:34:29.473571  671436 main.go:143] libmachine: Decoding PEM data...
	I1027 22:34:29.473583  671436 main.go:143] libmachine: Parsing certificate...
	I1027 22:34:29.474027  671436 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-695499 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:34:29.493560  671436 cli_runner.go:211] docker network inspect kubernetes-upgrade-695499 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:34:29.493631  671436 network_create.go:284] running [docker network inspect kubernetes-upgrade-695499] to gather additional debugging logs...
	I1027 22:34:29.493653  671436 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-695499
	W1027 22:34:29.511680  671436 cli_runner.go:211] docker network inspect kubernetes-upgrade-695499 returned with exit code 1
	I1027 22:34:29.511744  671436 network_create.go:287] error running [docker network inspect kubernetes-upgrade-695499]: docker network inspect kubernetes-upgrade-695499: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-695499 not found
	I1027 22:34:29.511773  671436 network_create.go:289] output of [docker network inspect kubernetes-upgrade-695499]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-695499 not found
	
	** /stderr **
	I1027 22:34:29.511888  671436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:34:29.531112  671436 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:34:29.532204  671436 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:34:29.532833  671436 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:34:29.534057  671436 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7ff60}
	I1027 22:34:29.534101  671436 network_create.go:124] attempt to create docker network kubernetes-upgrade-695499 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 22:34:29.534184  671436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-695499 kubernetes-upgrade-695499
	I1027 22:34:29.608963  671436 network_create.go:108] docker network kubernetes-upgrade-695499 192.168.76.0/24 created
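The subnet hunt above walks a fixed ladder of private /24s, from 192.168.49.0 upward in steps of 9, and takes the first one that no existing docker bridge claims. A toy version of that walk; the hard-coded taken set mirrors the three bridges skipped in this log, where minikube instead derives it from `docker network inspect`:

package main

import (
	"fmt"
)

// firstFreeSubnet returns the first candidate /24 not already claimed
// by an existing docker bridge, following the step-of-9 ladder visible
// in the log above (.49, .58, .67, .76, ...).
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet
		}
	}
	return "" // ladder exhausted
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-d433cca18beb
		"192.168.58.0/24": true, // br-b2deffb37428
		"192.168.67.0/24": true, // br-8aa1ad217c0a
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
}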
	I1027 22:34:29.609002  671436 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-695499" container
	I1027 22:34:29.609077  671436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:34:29.631844  671436 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-695499 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-695499 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:34:29.652007  671436 oci.go:103] Successfully created a docker volume kubernetes-upgrade-695499
	I1027 22:34:29.652138  671436 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-695499-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-695499 --entrypoint /usr/bin/test -v kubernetes-upgrade-695499:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:34:30.077102  671436 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-695499
	I1027 22:34:30.077146  671436 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 22:34:30.077171  671436 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:34:30.077261  671436 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-695499:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.093442597Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.094300047Z" level=info msg="Conmon does support the --sync option"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.094326139Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.094339107Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.095046827Z" level=info msg="Conmon does support the --sync option"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.095060397Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.099281943Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.099304025Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.099916972Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.100424904Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.100484472Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.106862533Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151051056Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-b87mn Namespace:kube-system ID:991950f5e1a656a6f60eaadc37fe143f3dd6312ea3afa1fa0e68ea6ce86df079 UID:813bc0ca-bc78-4362-9408-c9d3da00c90a NetNS:/var/run/netns/837eb9bb-d938-4f05-9ad9-bbf3727b8bf1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a4470}] Aliases:map[]}"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151326459Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-b87mn for CNI network kindnet (type=ptp)"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151836789Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151863441Z" level=info msg="Starting seccomp notifier watcher"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151918678Z" level=info msg="Create NRI interface"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152060685Z" level=info msg="built-in NRI default validator is disabled"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152078114Z" level=info msg="runtime interface created"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152095547Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152104407Z" level=info msg="runtime interface starting up..."
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152113546Z" level=info msg="starting plugins..."
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152140924Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152595254Z" level=info msg="No systemd watchdog enabled"
	Oct 27 22:34:29 pause-067652 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a5b7b50d4040b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   991950f5e1a65       coredns-66bc5c9577-b87mn               kube-system
	4d25924c4df6a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   389be2b814e33       kube-proxy-zhh4l                       kube-system
	7e91149f673e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   25054d9b44d58       kindnet-m9bfp                          kube-system
	6eb92e57262ba       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   f88cb9ca3c2a4       kube-scheduler-pause-067652            kube-system
	81f2d85ae3519       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   03f80a084c9c1       kube-apiserver-pause-067652            kube-system
	c80661f95d0b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   87ccd61347299       etcd-pause-067652                      kube-system
	4c2ddd4a18261       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   f8452ad9be944       kube-controller-manager-pause-067652   kube-system
	
	
	==> coredns [a5b7b50d4040b9768126fbb779cf62f65a7546364bfa376d804072902a378d17] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56539 - 56226 "HINFO IN 1511707238290419606.4793965326947131984. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029915802s
	
	
	==> describe nodes <==
	Name:               pause-067652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-067652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=pause-067652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_34_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:34:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-067652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:34:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:34:22 +0000   Mon, 27 Oct 2025 22:34:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:34:22 +0000   Mon, 27 Oct 2025 22:34:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:34:22 +0000   Mon, 27 Oct 2025 22:34:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:34:22 +0000   Mon, 27 Oct 2025 22:34:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-067652
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4fd845e8-f765-4510-addc-0aac115564fd
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-b87mn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-067652                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-m9bfp                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-067652             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-067652    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-zhh4l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-067652             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-067652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-067652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node pause-067652 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node pause-067652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node pause-067652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node pause-067652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-067652 event: Registered Node pause-067652 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-067652 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [c80661f95d0b8aa17deb1a4d771ca76f6e8d600e9608fcaf7bcdd6b3d302948e] <==
	{"level":"warn","ts":"2025-10-27T22:34:02.496448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.509903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.523797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.539812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.554964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.574766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.583523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.612804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.616518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.628609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.650509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.662338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.677567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.686539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.695874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.706257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.719229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.735861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.763533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.772647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.821519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.825790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.836450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.903201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45914","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:34:19.041218Z","caller":"traceutil/trace.go:172","msg":"trace[2125268942] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"116.286183ms","start":"2025-10-27T22:34:18.924907Z","end":"2025-10-27T22:34:19.041193Z","steps":["trace[2125268942] 'process raft request'  (duration: 95.991591ms)","trace[2125268942] 'compare'  (duration: 20.125428ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:34:36 up  2:16,  0 user,  load average: 2.74, 1.41, 2.45
	Linux pause-067652 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e91149f673e2c10688e02752d115bb3adcae29b62e2fd2dc44183254068dcaa] <==
	I1027 22:34:12.209814       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:34:12.210248       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 22:34:12.210468       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:34:12.210492       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:34:12.210523       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:34:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:34:12.506546       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:34:12.506579       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:34:12.506591       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:34:12.506803       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:34:13.007805       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:34:13.007928       1 metrics.go:72] Registering metrics
	I1027 22:34:13.008069       1 controller.go:711] "Syncing nftables rules"
	I1027 22:34:22.417397       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:34:22.417447       1 main.go:301] handling current node
	I1027 22:34:32.421027       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:34:32.421069       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81f2d85ae3519636b9310c3e124b232192aee71611336d7820879bc258ccf577] <==
	I1027 22:34:03.647914       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1027 22:34:03.648407       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:34:03.648642       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 22:34:03.656331       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:34:03.668238       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 22:34:03.681324       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:34:03.685749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:34:03.693729       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:34:04.554228       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 22:34:04.559395       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 22:34:04.559428       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:34:05.181235       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:34:05.219401       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:34:05.264713       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 22:34:05.279491       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 22:34:05.281135       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:34:05.287289       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:34:05.612774       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:34:06.205590       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:34:06.219103       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 22:34:06.228399       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:34:11.386562       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:34:11.406311       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:34:11.458020       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 22:34:11.525863       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4c2ddd4a18261619f09936f5cf6470363cfbdd6270130ee2f201206e52c924e3] <==
	I1027 22:34:10.611487       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 22:34:10.611627       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:34:10.611762       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 22:34:10.611879       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:34:10.612511       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 22:34:10.612550       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:34:10.613239       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:34:10.613283       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:34:10.613441       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:34:10.614773       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:34:10.614796       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:34:10.615389       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:34:10.617473       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:34:10.621091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:34:10.623294       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:34:10.623378       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:34:10.623448       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:34:10.623457       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:34:10.623466       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:34:10.626655       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:34:10.629826       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:34:10.633316       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-067652" podCIDRs=["10.244.0.0/24"]
	I1027 22:34:10.637001       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:34:10.639387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:34:25.613355       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4d25924c4df6aa14a77cfaec45632997d587987e4bcaec77378f5187ae9edcd7] <==
	I1027 22:34:12.006628       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:34:12.078577       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:34:12.179763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:34:12.179834       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 22:34:12.179992       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:34:12.208391       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:34:12.208476       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:34:12.216137       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:34:12.217106       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:34:12.217216       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:34:12.219036       1 config.go:200] "Starting service config controller"
	I1027 22:34:12.219054       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:34:12.219144       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:34:12.219189       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:34:12.219234       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:34:12.219240       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:34:12.219522       1 config.go:309] "Starting node config controller"
	I1027 22:34:12.219796       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:34:12.219866       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:34:12.319474       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:34:12.319501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:34:12.319513       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6eb92e57262ba23c48023890df53c802fa6236f0703a6780cc2f07abf8afe516] <==
	E1027 22:34:03.662038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:34:03.662145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:34:03.662412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:34:03.662772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:34:03.667976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:34:03.668144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:34:03.668232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:34:03.668273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:34:03.668321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:34:03.668381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:34:03.668454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:34:03.669314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:34:03.669498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:34:03.669619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:34:03.669678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:34:03.669703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:34:04.578668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:34:04.605222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:34:04.731737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:34:04.837781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:34:04.838674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:34:04.863208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:34:04.875501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:34:04.881102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1027 22:34:07.151220       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:34:07 pause-067652 kubelet[1279]: E1027 22:34:07.090909    1279 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-067652\" already exists" pod="kube-system/etcd-pause-067652"
	Oct 27 22:34:07 pause-067652 kubelet[1279]: I1027 22:34:07.109767    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-067652" podStartSLOduration=1.109742653 podStartE2EDuration="1.109742653s" podCreationTimestamp="2025-10-27 22:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:07.108936189 +0000 UTC m=+1.137565925" watchObservedRunningTime="2025-10-27 22:34:07.109742653 +0000 UTC m=+1.138372382"
	Oct 27 22:34:07 pause-067652 kubelet[1279]: I1027 22:34:07.146821    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-067652" podStartSLOduration=1.146797789 podStartE2EDuration="1.146797789s" podCreationTimestamp="2025-10-27 22:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:07.128499432 +0000 UTC m=+1.157129157" watchObservedRunningTime="2025-10-27 22:34:07.146797789 +0000 UTC m=+1.175427521"
	Oct 27 22:34:07 pause-067652 kubelet[1279]: I1027 22:34:07.176378    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-067652" podStartSLOduration=1.176355142 podStartE2EDuration="1.176355142s" podCreationTimestamp="2025-10-27 22:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:07.176264965 +0000 UTC m=+1.204894699" watchObservedRunningTime="2025-10-27 22:34:07.176355142 +0000 UTC m=+1.204984876"
	Oct 27 22:34:07 pause-067652 kubelet[1279]: I1027 22:34:07.176532    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-067652" podStartSLOduration=2.176523478 podStartE2EDuration="2.176523478s" podCreationTimestamp="2025-10-27 22:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:07.151671492 +0000 UTC m=+1.180301236" watchObservedRunningTime="2025-10-27 22:34:07.176523478 +0000 UTC m=+1.205153215"
	Oct 27 22:34:10 pause-067652 kubelet[1279]: I1027 22:34:10.704075    1279 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 22:34:10 pause-067652 kubelet[1279]: I1027 22:34:10.704826    1279 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588474    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e3297a67-3f34-4a07-b21e-7bf6c8417586-cni-cfg\") pod \"kindnet-m9bfp\" (UID: \"e3297a67-3f34-4a07-b21e-7bf6c8417586\") " pod="kube-system/kindnet-m9bfp"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588537    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtkt\" (UniqueName: \"kubernetes.io/projected/e3297a67-3f34-4a07-b21e-7bf6c8417586-kube-api-access-6gtkt\") pod \"kindnet-m9bfp\" (UID: \"e3297a67-3f34-4a07-b21e-7bf6c8417586\") " pod="kube-system/kindnet-m9bfp"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588612    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f530998-6842-4db5-bbe1-359bdee56be3-xtables-lock\") pod \"kube-proxy-zhh4l\" (UID: \"2f530998-6842-4db5-bbe1-359bdee56be3\") " pod="kube-system/kube-proxy-zhh4l"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588636    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3297a67-3f34-4a07-b21e-7bf6c8417586-lib-modules\") pod \"kindnet-m9bfp\" (UID: \"e3297a67-3f34-4a07-b21e-7bf6c8417586\") " pod="kube-system/kindnet-m9bfp"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588664    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f530998-6842-4db5-bbe1-359bdee56be3-kube-proxy\") pod \"kube-proxy-zhh4l\" (UID: \"2f530998-6842-4db5-bbe1-359bdee56be3\") " pod="kube-system/kube-proxy-zhh4l"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588693    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2g4l\" (UniqueName: \"kubernetes.io/projected/2f530998-6842-4db5-bbe1-359bdee56be3-kube-api-access-t2g4l\") pod \"kube-proxy-zhh4l\" (UID: \"2f530998-6842-4db5-bbe1-359bdee56be3\") " pod="kube-system/kube-proxy-zhh4l"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588713    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3297a67-3f34-4a07-b21e-7bf6c8417586-xtables-lock\") pod \"kindnet-m9bfp\" (UID: \"e3297a67-3f34-4a07-b21e-7bf6c8417586\") " pod="kube-system/kindnet-m9bfp"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588741    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f530998-6842-4db5-bbe1-359bdee56be3-lib-modules\") pod \"kube-proxy-zhh4l\" (UID: \"2f530998-6842-4db5-bbe1-359bdee56be3\") " pod="kube-system/kube-proxy-zhh4l"
	Oct 27 22:34:12 pause-067652 kubelet[1279]: I1027 22:34:12.113703    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m9bfp" podStartSLOduration=1.113677811 podStartE2EDuration="1.113677811s" podCreationTimestamp="2025-10-27 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:12.112823802 +0000 UTC m=+6.141453537" watchObservedRunningTime="2025-10-27 22:34:12.113677811 +0000 UTC m=+6.142307544"
	Oct 27 22:34:12 pause-067652 kubelet[1279]: I1027 22:34:12.146492    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhh4l" podStartSLOduration=1.146460882 podStartE2EDuration="1.146460882s" podCreationTimestamp="2025-10-27 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:12.146339754 +0000 UTC m=+6.174969488" watchObservedRunningTime="2025-10-27 22:34:12.146460882 +0000 UTC m=+6.175090613"
	Oct 27 22:34:22 pause-067652 kubelet[1279]: I1027 22:34:22.824004    1279 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 22:34:22 pause-067652 kubelet[1279]: I1027 22:34:22.975917    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/813bc0ca-bc78-4362-9408-c9d3da00c90a-config-volume\") pod \"coredns-66bc5c9577-b87mn\" (UID: \"813bc0ca-bc78-4362-9408-c9d3da00c90a\") " pod="kube-system/coredns-66bc5c9577-b87mn"
	Oct 27 22:34:22 pause-067652 kubelet[1279]: I1027 22:34:22.975993    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25zm\" (UniqueName: \"kubernetes.io/projected/813bc0ca-bc78-4362-9408-c9d3da00c90a-kube-api-access-f25zm\") pod \"coredns-66bc5c9577-b87mn\" (UID: \"813bc0ca-bc78-4362-9408-c9d3da00c90a\") " pod="kube-system/coredns-66bc5c9577-b87mn"
	Oct 27 22:34:24 pause-067652 kubelet[1279]: I1027 22:34:24.146688    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b87mn" podStartSLOduration=13.146665769 podStartE2EDuration="13.146665769s" podCreationTimestamp="2025-10-27 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:24.146146545 +0000 UTC m=+18.174776278" watchObservedRunningTime="2025-10-27 22:34:24.146665769 +0000 UTC m=+18.175295501"
	Oct 27 22:34:33 pause-067652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:34:33 pause-067652 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:34:33 pause-067652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:34:33 pause-067652 systemd[1]: kubelet.service: Consumed 1.238s CPU time.
	

-- /stdout --
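The CRI-O configuration dump at the top of these logs can be re-collected from the node itself. A minimal Go sketch, mirroring how the harness shells out to the minikube binary; the binary path and profile name are copied from this run, and it assumes the node's crio binary supports the `crio config` subcommand (reasonable for the CRI-O 1.34.1 logged above, but worth verifying on other kicbase versions):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Dump the effective CRI-O TOML config from a minikube node by
	// running `sudo crio config` over `minikube ssh`. Binary path and
	// profile name are taken from this test run.
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "ssh",
			"-p", "pause-067652", "--", "sudo", "crio", "config")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("crio config failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out) // TOML matching the [crio.*] sections above
	}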
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-067652 -n pause-067652
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-067652 -n pause-067652: exit status 2 (379.798091ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
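For reference, a minimal Go sketch of the status probe the harness runs above. It treats a non-zero exit from `minikube status` as data to inspect rather than a hard failure, which is why the harness notes that exit status 2 "may be ok"; binary path, profile, and format string are copied verbatim from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// Probe the apiserver state of a profile. `minikube status` exits
	// non-zero when components are not all Running, so the exit code is
	// reported instead of aborting.
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "pause-067652", "-n", "pause-067652")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("exit status %d (may be ok)\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Printf("could not run status: %v\n", err)
			return
		}
		fmt.Printf("APIServer: %s\n", out)
	}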
helpers_test.go:269: (dbg) Run:  kubectl --context pause-067652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
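The field-selector query at helpers_test.go:269 lists every pod that is not in the Running phase. The same check can be written against the API directly; a sketch using client-go, with a context override standing in for `--context pause-067652` (the k8s.io import paths are the standard client-go modules, an assumption about how this would be vendored):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// List pods in all namespaces whose phase is not Running, mirroring
	// `kubectl get po -A --field-selector=status.phase!=Running`.
	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "pause-067652"}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}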
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-067652
helpers_test.go:243: (dbg) docker inspect pause-067652:

-- stdout --
	[
	    {
	        "Id": "8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca",
	        "Created": "2025-10-27T22:33:48.085449702Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 657723,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:33:48.143775362Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca/hosts",
	        "LogPath": "/var/lib/docker/containers/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca/8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca-json.log",
	        "Name": "/pause-067652",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-067652:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-067652",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d2b598050e963548d4c129825036aa56338fd86fe7a4d8409e7ea795fecceca",
	                "LowerDir": "/var/lib/docker/overlay2/ab8c8f9a09c9d1a253b0dc61a821ab0c205705452a5f9f69707b341cdcbf8436-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab8c8f9a09c9d1a253b0dc61a821ab0c205705452a5f9f69707b341cdcbf8436/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab8c8f9a09c9d1a253b0dc61a821ab0c205705452a5f9f69707b341cdcbf8436/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab8c8f9a09c9d1a253b0dc61a821ab0c205705452a5f9f69707b341cdcbf8436/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-067652",
	                "Source": "/var/lib/docker/volumes/pause-067652/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-067652",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-067652",
	                "name.minikube.sigs.k8s.io": "pause-067652",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57f64ef647568f4c5e017af294834eeefd956b5419a72167502f408cadaa580f",
	            "SandboxKey": "/var/run/docker/netns/57f64ef64756",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-067652": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:26:63:c5:aa:53",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c9de812c0cccf26ebda388d884059189901c88208c87e7ef90581d800110902",
	                    "EndpointID": "c035eda68a293e41521334679799966f4c417fee48601687fb1b41280e0e7526",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-067652",
	                        "8d2b598050e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
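The inspect output above confirms the container's published ports (22/tcp -> 32983 for SSH, 8443/tcp -> 32986 for the apiserver) and its attachment to the user-defined "pause-067652" network at 192.168.85.2. As a sketch of how to pull a single mapped port out of that JSON by hand, the same Go-template query the harness issues in the logs below works as a one-liner (assuming the container is still running):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-067652
	# prints 32983, the host port minikube dials for SSH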
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-067652 -n pause-067652
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-067652 -n pause-067652: exit status 2 (379.703614ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
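Note: minikube status signals component state through its exit code, so a non-zero exit alongside a "Running" host is plausible while a pause is in flight; exit status 2 here most likely reflects a paused or stopped cluster component rather than a harness defect, which is why the helper flags it as "may be ok". To reproduce the check by hand (a sketch, reusing the flags from the invocation above):

	out/minikube-linux-amd64 status --format={{.Host}} -p pause-067652 -n pause-067652
	echo $?   # 2 in this run: host Running, cluster component not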
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-067652 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-067652 logs -n 25: (1.142456541s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-589123 --schedule 5m                                                                                                   │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 5m                                                                                                   │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 5m                                                                                                   │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --cancel-scheduled                                                                                              │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │ 27 Oct 25 22:32 UTC │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │                     │
	│ stop    │ -p scheduled-stop-589123 --schedule 15s                                                                                                  │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:32 UTC │ 27 Oct 25 22:32 UTC │
	│ delete  │ -p scheduled-stop-589123                                                                                                                 │ scheduled-stop-589123       │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:33 UTC │
	│ start   │ -p insufficient-storage-960733 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-960733 │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │                     │
	│ delete  │ -p insufficient-storage-960733                                                                                                           │ insufficient-storage-960733 │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:33 UTC │
	│ start   │ -p force-systemd-env-078908 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-078908    │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p pause-067652 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-067652                │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p offline-crio-037558 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-037558         │ jenkins │ v1.37.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p stopped-upgrade-126023 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-126023      │ jenkins │ v1.32.0 │ 27 Oct 25 22:33 UTC │ 27 Oct 25 22:34 UTC │
	│ delete  │ -p force-systemd-env-078908                                                                                                              │ force-systemd-env-078908    │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p missing-upgrade-912550 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-912550      │ jenkins │ v1.32.0 │ 27 Oct 25 22:34 UTC │                     │
	│ stop    │ stopped-upgrade-126023 stop                                                                                                              │ stopped-upgrade-126023      │ jenkins │ v1.32.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p stopped-upgrade-126023 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-126023      │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │                     │
	│ start   │ -p pause-067652 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-067652                │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:34 UTC │
	│ delete  │ -p offline-crio-037558                                                                                                                   │ offline-crio-037558         │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │ 27 Oct 25 22:34 UTC │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-695499   │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │                     │
	│ pause   │ -p pause-067652 --alsologtostderr -v=5                                                                                                   │ pause-067652                │ jenkins │ v1.37.0 │ 27 Oct 25 22:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:34:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:34:29.248123  671436 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:34:29.248422  671436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:34:29.248432  671436 out.go:374] Setting ErrFile to fd 2...
	I1027 22:34:29.248436  671436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:34:29.248674  671436 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:34:29.249186  671436 out.go:368] Setting JSON to false
	I1027 22:34:29.250492  671436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8208,"bootTime":1761596261,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:34:29.250585  671436 start.go:143] virtualization: kvm guest
	I1027 22:34:29.252781  671436 out.go:179] * [kubernetes-upgrade-695499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:34:29.253787  671436 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:34:29.253810  671436 notify.go:221] Checking for updates...
	I1027 22:34:29.256409  671436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:34:29.257503  671436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:34:29.258382  671436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:34:26.149286  669685 out.go:252] * Updating the running docker "pause-067652" container ...
	I1027 22:34:26.149325  669685 machine.go:94] provisionDockerMachine start ...
	I1027 22:34:26.149414  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:26.168233  669685 main.go:143] libmachine: Using SSH client type: native
	I1027 22:34:26.168536  669685 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1027 22:34:26.168553  669685 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:34:26.312409  669685 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-067652
	
	I1027 22:34:26.312486  669685 ubuntu.go:182] provisioning hostname "pause-067652"
	I1027 22:34:26.312575  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:26.333469  669685 main.go:143] libmachine: Using SSH client type: native
	I1027 22:34:26.333712  669685 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1027 22:34:26.333733  669685 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-067652 && echo "pause-067652" | sudo tee /etc/hostname
	I1027 22:34:26.494906  669685 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-067652
	
	I1027 22:34:26.495003  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:26.515741  669685 main.go:143] libmachine: Using SSH client type: native
	I1027 22:34:26.516049  669685 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1027 22:34:26.516070  669685 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-067652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-067652/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-067652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:34:26.663679  669685 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:34:26.663752  669685 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:34:26.663800  669685 ubuntu.go:190] setting up certificates
	I1027 22:34:26.663812  669685 provision.go:84] configureAuth start
	I1027 22:34:26.663891  669685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-067652
	I1027 22:34:26.685214  669685 provision.go:143] copyHostCerts
	I1027 22:34:26.685279  669685 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:34:26.685303  669685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:34:26.685373  669685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:34:26.685511  669685 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:34:26.685524  669685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:34:26.685556  669685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:34:26.685657  669685 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:34:26.685673  669685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:34:26.685707  669685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:34:26.685803  669685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.pause-067652 san=[127.0.0.1 192.168.85.2 localhost minikube pause-067652]
	I1027 22:34:27.020693  669685 provision.go:177] copyRemoteCerts
	I1027 22:34:27.020758  669685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:34:27.020800  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.043576  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.154290  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:34:27.173847  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1027 22:34:27.192196  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:34:27.210176  669685 provision.go:87] duration metric: took 546.350068ms to configureAuth
	I1027 22:34:27.210205  669685 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:34:27.210447  669685 config.go:182] Loaded profile config "pause-067652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:34:27.210563  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.230050  669685 main.go:143] libmachine: Using SSH client type: native
	I1027 22:34:27.230310  669685 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1027 22:34:27.230327  669685 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:34:27.535760  669685 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:34:27.535788  669685 machine.go:97] duration metric: took 1.38645634s to provisionDockerMachine
	I1027 22:34:27.535801  669685 start.go:293] postStartSetup for "pause-067652" (driver="docker")
	I1027 22:34:27.535810  669685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:34:27.535859  669685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:34:27.535909  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.553747  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.654843  669685 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:34:27.658534  669685 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:34:27.658565  669685 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:34:27.658576  669685 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:34:27.658636  669685 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:34:27.658707  669685 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:34:27.658801  669685 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:34:27.666691  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:34:27.685143  669685 start.go:296] duration metric: took 149.329619ms for postStartSetup
	I1027 22:34:27.685198  669685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:34:27.685259  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.702134  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.799143  669685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:34:27.804165  669685 fix.go:57] duration metric: took 1.679221538s for fixHost
	I1027 22:34:27.804192  669685 start.go:83] releasing machines lock for "pause-067652", held for 1.679265739s
	I1027 22:34:27.804257  669685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-067652
	I1027 22:34:27.822844  669685 ssh_runner.go:195] Run: cat /version.json
	I1027 22:34:27.822894  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.822972  669685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:34:27.823032  669685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-067652
	I1027 22:34:27.840935  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.845749  669685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/pause-067652/id_rsa Username:docker}
	I1027 22:34:27.943792  669685 ssh_runner.go:195] Run: systemctl --version
	I1027 22:34:28.000936  669685 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:34:28.043444  669685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:34:28.048317  669685 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:34:28.048395  669685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:34:28.056411  669685 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:34:28.056435  669685 start.go:496] detecting cgroup driver to use...
	I1027 22:34:28.056468  669685 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:34:28.056516  669685 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:34:28.072493  669685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:34:28.086381  669685 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:34:28.086437  669685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:34:28.105217  669685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:34:28.122667  669685 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:34:28.260346  669685 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:34:28.385852  669685 docker.go:234] disabling docker service ...
	I1027 22:34:28.385926  669685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:34:28.402621  669685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:34:28.418579  669685 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:34:28.551997  669685 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:34:28.690437  669685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:34:28.704755  669685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:34:28.720750  669685 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:34:28.720799  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.731166  669685 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:34:28.731227  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.742012  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.751937  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.762245  669685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:34:28.771330  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.781459  669685 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.792488  669685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:34:28.802323  669685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:34:28.810323  669685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:34:28.819239  669685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:28.981786  669685 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:34:29.156537  669685 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:34:29.156603  669685 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:34:29.161844  669685 start.go:564] Will wait 60s for crictl version
	I1027 22:34:29.161907  669685 ssh_runner.go:195] Run: which crictl
	I1027 22:34:29.166196  669685 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:34:29.191864  669685 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:34:29.191931  669685 ssh_runner.go:195] Run: crio --version
	I1027 22:34:29.224956  669685 ssh_runner.go:195] Run: crio --version
	I1027 22:34:29.260519  671436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:34:29.260527  669685 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:34:29.261478  671436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:34:29.263095  671436 config.go:182] Loaded profile config "missing-upgrade-912550": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1027 22:34:29.263298  671436 config.go:182] Loaded profile config "pause-067652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:34:29.263429  671436 config.go:182] Loaded profile config "stopped-upgrade-126023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1027 22:34:29.263530  671436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:34:29.290740  671436 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:34:29.290834  671436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:34:29.363629  671436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 22:34:29.349340355 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:34:29.363762  671436 docker.go:318] overlay module found
	I1027 22:34:29.365435  671436 out.go:179] * Using the docker driver based on user configuration
	I1027 22:34:29.366321  671436 start.go:307] selected driver: docker
	I1027 22:34:29.366339  671436 start.go:928] validating driver "docker" against <nil>
	I1027 22:34:29.366363  671436 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:34:29.367113  671436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:34:29.436874  671436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 22:34:29.425360607 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:34:29.437131  671436 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:34:29.437432  671436 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:34:29.439002  671436 out.go:179] * Using Docker driver with root privileges
	I1027 22:34:29.439883  671436 cni.go:84] Creating CNI manager for ""
	I1027 22:34:29.440001  671436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:34:29.440018  671436 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:34:29.440100  671436 start.go:351] cluster config:
	{Name:kubernetes-upgrade-695499 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-695499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:34:29.441156  671436 out.go:179] * Starting "kubernetes-upgrade-695499" primary control-plane node in "kubernetes-upgrade-695499" cluster
	I1027 22:34:29.442327  671436 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:34:29.443760  671436 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:34:29.444618  671436 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 22:34:29.444669  671436 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1027 22:34:29.444687  671436 cache.go:59] Caching tarball of preloaded images
	I1027 22:34:29.444720  671436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:34:29.444790  671436 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:34:29.444806  671436 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1027 22:34:29.444919  671436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/config.json ...
	I1027 22:34:29.444966  671436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/config.json: {Name:mk3497c50cd3b88d50ee8a4b6f0b69e927b8bc30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.468544  671436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:34:29.468561  671436 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:34:29.468578  671436 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:34:29.468619  671436 start.go:360] acquireMachinesLock for kubernetes-upgrade-695499: {Name:mkec40f5d86362c3c0e1baba0d014c7a6178b3d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:34:29.468725  671436 start.go:364] duration metric: took 85.374µs to acquireMachinesLock for "kubernetes-upgrade-695499"
	I1027 22:34:29.468760  671436 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-695499 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-695499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:34:29.468847  671436 start.go:125] createHost starting for "" (driver="docker")
	I1027 22:34:26.623080  666396 out.go:204]   - Generating certificates and keys ...
	I1027 22:34:26.623234  666396 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1027 22:34:26.623343  666396 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1027 22:34:26.720250  666396 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:34:26.968295  666396 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:34:27.299257  666396 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:34:27.682908  666396 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1027 22:34:27.884133  666396 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1027 22:34:27.884337  666396 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-912550] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:34:28.100811  666396 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1027 22:34:28.101066  666396 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-912550] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:34:28.207815  666396 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:34:28.428714  666396 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:34:28.521195  666396 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1027 22:34:28.521473  666396 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:34:28.794819  666396 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:34:29.186635  666396 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:34:29.440040  666396 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:34:29.563995  666396 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:34:29.564491  666396 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:34:29.570689  666396 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:34:29.572095  666396 out.go:204]   - Booting up control plane ...
	I1027 22:34:29.572222  666396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:34:29.572329  666396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:34:29.572803  666396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:34:29.584093  666396 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:34:29.585164  666396 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:34:29.585220  666396 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1027 22:34:29.680640  666396 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1027 22:34:29.261459  669685 cli_runner.go:164] Run: docker network inspect pause-067652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:34:29.283650  669685 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 22:34:29.289045  669685 kubeadm.go:884] updating cluster {Name:pause-067652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-067652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:34:29.289206  669685 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:34:29.289245  669685 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:34:29.331503  669685 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:34:29.331571  669685 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:34:29.331659  669685 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:34:29.369543  669685 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:34:29.369563  669685 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:34:29.369570  669685 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 22:34:29.369686  669685 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-067652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-067652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:34:29.369760  669685 ssh_runner.go:195] Run: crio config
	I1027 22:34:29.433998  669685 cni.go:84] Creating CNI manager for ""
	I1027 22:34:29.434024  669685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:34:29.434055  669685 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:34:29.434085  669685 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-067652 NodeName:pause-067652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:34:29.434255  669685 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-067652"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:34:29.434318  669685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:34:29.443500  669685 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:34:29.443568  669685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:34:29.451847  669685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1027 22:34:29.467051  669685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:34:29.480474  669685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1027 22:34:29.494655  669685 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:34:29.498721  669685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:29.643513  669685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:34:29.658999  669685 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652 for IP: 192.168.85.2
	I1027 22:34:29.659020  669685 certs.go:195] generating shared ca certs ...
	I1027 22:34:29.659041  669685 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.659219  669685 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:34:29.659290  669685 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:34:29.659304  669685 certs.go:257] generating profile certs ...
	I1027 22:34:29.659424  669685 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.key
	I1027 22:34:29.659494  669685 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/apiserver.key.613d8030
	I1027 22:34:29.659554  669685 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/proxy-client.key
	I1027 22:34:29.659712  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:34:29.659755  669685 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:34:29.659771  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:34:29.659803  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:34:29.659846  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:34:29.659877  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:34:29.659964  669685 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:34:29.660913  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:34:29.684540  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:34:29.709125  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:34:29.728100  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:34:29.749352  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:34:29.770133  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:34:29.790451  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:34:29.813088  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:34:29.835076  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:34:29.856633  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:34:29.875789  669685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:34:29.895996  669685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:34:29.913742  669685 ssh_runner.go:195] Run: openssl version
	I1027 22:34:29.923252  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:34:29.937818  669685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.942617  669685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.942677  669685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.994836  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:34:30.025584  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:34:30.038341  669685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:34:30.044371  669685 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:34:30.044440  669685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:34:30.100081  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:34:30.110343  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:34:30.120144  669685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:34:30.125109  669685 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:34:30.125181  669685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:34:30.170088  669685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
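Each `openssl x509 -hash -noout` / `ln -fs .../<hash>.0` pair above installs a CA under the OpenSSL subject-hash naming scheme, so TLS libraries that scan /etc/ssl/certs can find it by hash. A minimal Go sketch of that step, shelling out to openssl just as the log does (paths are placeholders; needs privileges to write /etc/ssl/certs):

// Minimal sketch of the hash-and-symlink step: ask openssl for the cert's
// subject hash, then link /etc/ssl/certs/<hash>.0 at the PEM file.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}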
	I1027 22:34:30.180253  669685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:34:30.184885  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:34:30.232283  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:34:30.274015  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:34:30.310067  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:34:30.346541  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:34:30.381518  669685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
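The `-checkend 86400` runs above ask whether each certificate expires within the next 24 hours (86400 seconds). The equivalent check in Go, using crypto/x509 rather than shelling out — a sketch, not minikube's code:

// Go equivalent of `openssl x509 -noout -in cert.crt -checkend 86400`:
// report whether the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}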
	I1027 22:34:30.417937  669685 kubeadm.go:401] StartCluster: {Name:pause-067652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-067652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:34:30.418084  669685 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:34:30.418166  669685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:34:30.450903  669685 cri.go:89] found id: "a5b7b50d4040b9768126fbb779cf62f65a7546364bfa376d804072902a378d17"
	I1027 22:34:30.450934  669685 cri.go:89] found id: "4d25924c4df6aa14a77cfaec45632997d587987e4bcaec77378f5187ae9edcd7"
	I1027 22:34:30.450940  669685 cri.go:89] found id: "7e91149f673e2c10688e02752d115bb3adcae29b62e2fd2dc44183254068dcaa"
	I1027 22:34:30.450964  669685 cri.go:89] found id: "6eb92e57262ba23c48023890df53c802fa6236f0703a6780cc2f07abf8afe516"
	I1027 22:34:30.450969  669685 cri.go:89] found id: "81f2d85ae3519636b9310c3e124b232192aee71611336d7820879bc258ccf577"
	I1027 22:34:30.450973  669685 cri.go:89] found id: "c80661f95d0b8aa17deb1a4d771ca76f6e8d600e9608fcaf7bcdd6b3d302948e"
	I1027 22:34:30.450978  669685 cri.go:89] found id: "4c2ddd4a18261619f09936f5cf6470363cfbdd6270130ee2f201206e52c924e3"
	I1027 22:34:30.450981  669685 cri.go:89] found id: ""
	I1027 22:34:30.451029  669685 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:34:30.463777  669685 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:34:30Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:34:30.463869  669685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:34:30.473401  669685 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:34:30.473424  669685 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:34:30.473494  669685 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:34:30.482433  669685 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:30.483170  669685 kubeconfig.go:125] found "pause-067652" server: "https://192.168.85.2:8443"
	I1027 22:34:30.484373  669685 kapi.go:59] client config for pause-067652: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.key", CAFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:34:30.484970  669685 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 22:34:30.484997  669685 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 22:34:30.485004  669685 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 22:34:30.485010  669685 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 22:34:30.485016  669685 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 22:34:30.485436  669685 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:34:30.494886  669685 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 22:34:30.494923  669685 kubeadm.go:602] duration metric: took 21.49202ms to restartPrimaryControlPlane
	I1027 22:34:30.494934  669685 kubeadm.go:403] duration metric: took 77.010887ms to StartCluster
	I1027 22:34:30.494966  669685 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:30.495041  669685 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:34:30.496195  669685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:30.496853  669685 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:34:30.496982  669685 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:34:30.497124  669685 config.go:182] Loaded profile config "pause-067652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:34:30.498759  669685 out.go:179] * Verifying Kubernetes components...
	I1027 22:34:30.498756  669685 out.go:179] * Enabled addons: 
	I1027 22:34:30.499767  669685 addons.go:514] duration metric: took 2.797292ms for enable addons: enabled=[]
	I1027 22:34:30.499808  669685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:30.666203  669685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:34:30.688703  669685 node_ready.go:35] waiting up to 6m0s for node "pause-067652" to be "Ready" ...
	I1027 22:34:30.704082  669685 node_ready.go:49] node "pause-067652" is "Ready"
	I1027 22:34:30.704121  669685 node_ready.go:38] duration metric: took 15.376352ms for node "pause-067652" to be "Ready" ...
	I1027 22:34:30.704141  669685 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:34:30.704208  669685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:34:30.722123  669685 api_server.go:72] duration metric: took 225.222266ms to wait for apiserver process to appear ...
	I1027 22:34:30.722169  669685 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:34:30.722195  669685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 22:34:30.729872  669685 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 22:34:30.731345  669685 api_server.go:141] control plane version: v1.34.1
	I1027 22:34:30.731379  669685 api_server.go:131] duration metric: took 9.198013ms to wait for apiserver health ...
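The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok". A minimal sketch of the same probe, trusting the cluster CA from the path shown earlier in this log:

// Minimal sketch of the /healthz probe: HTTPS GET trusting the cluster CA,
// expecting "200 ok".
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}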
	I1027 22:34:30.731390  669685 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:34:30.736407  669685 system_pods.go:59] 7 kube-system pods found
	I1027 22:34:30.736462  669685 system_pods.go:61] "coredns-66bc5c9577-b87mn" [813bc0ca-bc78-4362-9408-c9d3da00c90a] Running
	I1027 22:34:30.736474  669685 system_pods.go:61] "etcd-pause-067652" [a81dd29d-0459-4cfb-9526-da184102a77a] Running
	I1027 22:34:30.736481  669685 system_pods.go:61] "kindnet-m9bfp" [e3297a67-3f34-4a07-b21e-7bf6c8417586] Running
	I1027 22:34:30.736487  669685 system_pods.go:61] "kube-apiserver-pause-067652" [92dbf739-c2eb-4ed8-8cff-c62d557fb0c4] Running
	I1027 22:34:30.736494  669685 system_pods.go:61] "kube-controller-manager-pause-067652" [b22f0aa8-ce68-4380-945b-e2e926b86f1f] Running
	I1027 22:34:30.736500  669685 system_pods.go:61] "kube-proxy-zhh4l" [2f530998-6842-4db5-bbe1-359bdee56be3] Running
	I1027 22:34:30.736506  669685 system_pods.go:61] "kube-scheduler-pause-067652" [fc2e9923-17bc-4c0f-aaa8-de68234a9d2d] Running
	I1027 22:34:30.736515  669685 system_pods.go:74] duration metric: took 5.116524ms to wait for pod list to return data ...
	I1027 22:34:30.736529  669685 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:34:30.739287  669685 default_sa.go:45] found service account: "default"
	I1027 22:34:30.739345  669685 default_sa.go:55] duration metric: took 2.807504ms for default service account to be created ...
	I1027 22:34:30.739357  669685 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:34:30.744392  669685 system_pods.go:86] 7 kube-system pods found
	I1027 22:34:30.744424  669685 system_pods.go:89] "coredns-66bc5c9577-b87mn" [813bc0ca-bc78-4362-9408-c9d3da00c90a] Running
	I1027 22:34:30.744431  669685 system_pods.go:89] "etcd-pause-067652" [a81dd29d-0459-4cfb-9526-da184102a77a] Running
	I1027 22:34:30.744436  669685 system_pods.go:89] "kindnet-m9bfp" [e3297a67-3f34-4a07-b21e-7bf6c8417586] Running
	I1027 22:34:30.744441  669685 system_pods.go:89] "kube-apiserver-pause-067652" [92dbf739-c2eb-4ed8-8cff-c62d557fb0c4] Running
	I1027 22:34:30.744446  669685 system_pods.go:89] "kube-controller-manager-pause-067652" [b22f0aa8-ce68-4380-945b-e2e926b86f1f] Running
	I1027 22:34:30.744451  669685 system_pods.go:89] "kube-proxy-zhh4l" [2f530998-6842-4db5-bbe1-359bdee56be3] Running
	I1027 22:34:30.744456  669685 system_pods.go:89] "kube-scheduler-pause-067652" [fc2e9923-17bc-4c0f-aaa8-de68234a9d2d] Running
	I1027 22:34:30.744467  669685 system_pods.go:126] duration metric: took 5.101203ms to wait for k8s-apps to be running ...
	I1027 22:34:30.744476  669685 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:34:30.744529  669685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:34:30.764410  669685 system_svc.go:56] duration metric: took 19.904063ms WaitForService to wait for kubelet
	I1027 22:34:30.764441  669685 kubeadm.go:587] duration metric: took 267.54844ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:34:30.764463  669685 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:34:30.768730  669685 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:34:30.768836  669685 node_conditions.go:123] node cpu capacity is 8
	I1027 22:34:30.768871  669685 node_conditions.go:105] duration metric: took 4.402013ms to run NodePressure ...
	I1027 22:34:30.768888  669685 start.go:242] waiting for startup goroutines ...
	I1027 22:34:30.768898  669685 start.go:247] waiting for cluster config update ...
	I1027 22:34:30.768908  669685 start.go:256] writing updated cluster config ...
	I1027 22:34:30.769470  669685 ssh_runner.go:195] Run: rm -f paused
	I1027 22:34:30.775007  669685 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:34:30.776084  669685 kapi.go:59] client config for pause-067652: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/pause-067652/client.key", CAFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:34:30.780239  669685 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b87mn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.786443  669685 pod_ready.go:94] pod "coredns-66bc5c9577-b87mn" is "Ready"
	I1027 22:34:30.786471  669685 pod_ready.go:86] duration metric: took 6.203849ms for pod "coredns-66bc5c9577-b87mn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.788720  669685 pod_ready.go:83] waiting for pod "etcd-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.794297  669685 pod_ready.go:94] pod "etcd-pause-067652" is "Ready"
	I1027 22:34:30.794326  669685 pod_ready.go:86] duration metric: took 5.579931ms for pod "etcd-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.796976  669685 pod_ready.go:83] waiting for pod "kube-apiserver-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.801693  669685 pod_ready.go:94] pod "kube-apiserver-pause-067652" is "Ready"
	I1027 22:34:30.801717  669685 pod_ready.go:86] duration metric: took 4.661466ms for pod "kube-apiserver-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:30.804169  669685 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
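Each per-pod wait above (and the ones that resume further down for this profile) checks the pod's PodReady condition through the API. A client-go sketch of that check — not minikube's exact implementation:

// Sketch of a per-pod "Ready" check with client-go: fetch the pod and
// inspect its PodReady condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(clientset *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21790-482142/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(clientset, "kube-system", "etcd-pause-067652")
	fmt.Println(ready, err)
}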
	I1027 22:34:28.671612  667713 cli_runner.go:164] Run: docker network inspect stopped-upgrade-126023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:34:28.689516  667713 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1027 22:34:28.694548  667713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:34:28.706965  667713 kubeadm.go:884] updating cluster {Name:stopped-upgrade-126023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-126023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:34:28.707106  667713 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1027 22:34:28.707176  667713 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:34:28.754884  667713 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:34:28.754909  667713 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:34:28.754987  667713 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:34:28.793608  667713 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:34:28.793633  667713 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:34:28.793642  667713 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.3 crio true true} ...
	I1027 22:34:28.793770  667713 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-126023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-126023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:34:28.793848  667713 ssh_runner.go:195] Run: crio config
	I1027 22:34:28.844764  667713 cni.go:84] Creating CNI manager for ""
	I1027 22:34:28.844790  667713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:34:28.844818  667713 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:34:28.844847  667713 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-126023 NodeName:stopped-upgrade-126023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:34:28.845044  667713 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-126023"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:34:28.845129  667713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1027 22:34:28.856340  667713 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:34:28.856421  667713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:34:28.872625  667713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1027 22:34:28.891933  667713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:34:28.910069  667713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1027 22:34:28.929276  667713 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:34:28.932824  667713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
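The one-liner above rewrites /etc/hosts in place: strip any line already mapping the hostname, append the fresh "IP<tab>name" entry into a temp file, then `sudo cp` it back (cp over the temp file, rather than mv, preserves the original inode and any SELinux label on /etc/hosts). The same rewrite as a Go sketch:

// Sketch of the /etc/hosts rewrite: drop existing mappings for the name,
// append the new "IP<tab>name" entry, write the file back.
package main

import (
	"os"
	"strings"
)

func pinHost(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("192.168.103.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}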
	I1027 22:34:28.945289  667713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:29.027854  667713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:34:29.042834  667713 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023 for IP: 192.168.103.2
	I1027 22:34:29.042857  667713 certs.go:195] generating shared ca certs ...
	I1027 22:34:29.042877  667713 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.043055  667713 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:34:29.043119  667713 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:34:29.043134  667713 certs.go:257] generating profile certs ...
	I1027 22:34:29.043238  667713 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/client.key
	I1027 22:34:29.043269  667713 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key.b70c58af
	I1027 22:34:29.043285  667713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt.b70c58af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1027 22:34:29.233790  667713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt.b70c58af ...
	I1027 22:34:29.233815  667713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt.b70c58af: {Name:mk51f0c0519e3f54a9207eb40d44ba11d1b909c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.234018  667713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key.b70c58af ...
	I1027 22:34:29.234040  667713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key.b70c58af: {Name:mk54efeb4f8aaab6a5119df80cf96cd4d4dfcdc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.234158  667713 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt.b70c58af -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt
	I1027 22:34:29.234341  667713 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key.b70c58af -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key
	I1027 22:34:29.234532  667713 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/proxy-client.key
	I1027 22:34:29.234680  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:34:29.234719  667713 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:34:29.234729  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:34:29.234766  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:34:29.234800  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:34:29.234828  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:34:29.234881  667713 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:34:29.235633  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:34:29.265300  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:34:29.296990  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:34:29.329858  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:34:29.360041  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 22:34:29.392916  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:34:29.426593  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:34:29.456299  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:34:29.484406  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:34:29.512814  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:34:29.548042  667713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:34:29.581928  667713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:34:29.602978  667713 ssh_runner.go:195] Run: openssl version
	I1027 22:34:29.610021  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:34:29.624015  667713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:34:29.628003  667713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:34:29.628059  667713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:34:29.635960  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:34:29.646802  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:34:29.657837  667713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:34:29.662213  667713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:34:29.662280  667713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:34:29.671358  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:34:29.682441  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:34:29.696074  667713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.700428  667713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.700489  667713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:34:29.710977  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:34:29.722730  667713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:34:29.726389  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:34:29.735037  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:34:29.742918  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:34:29.750233  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:34:29.757527  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:34:29.765802  667713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1027 22:34:29.773092  667713 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-126023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-126023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:34:29.773177  667713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:34:29.773228  667713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:34:29.814731  667713 cri.go:89] found id: ""
	I1027 22:34:29.814796  667713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1027 22:34:29.826811  667713 kubeadm.go:414] apiserver tunnel failed: apiserver port not set
	I1027 22:34:29.826830  667713 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:34:29.826834  667713 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:34:29.826893  667713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:34:29.837872  667713 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:29.838748  667713 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-126023" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:34:29.839174  667713 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-126023" cluster setting kubeconfig missing "stopped-upgrade-126023" context setting]
	I1027 22:34:29.839766  667713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:29.840709  667713 kapi.go:59] client config for stopped-upgrade-126023: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/client.key", CAFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:34:29.841365  667713 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 22:34:29.841391  667713 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 22:34:29.841405  667713 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 22:34:29.841417  667713 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 22:34:29.841424  667713 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 22:34:29.841918  667713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:34:29.853128  667713 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-27 22:34:06.070695325 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-27 22:34:28.926158516 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: systemd
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
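The drift check above is simply `diff -u old new`: exit status 0 means the rendered kubeadm.yaml is unchanged (restart in place, as in the pause-067652 run earlier), exit status 1 means drift (reconfigure), and the unified diff itself becomes the log message. A Go sketch of that decision — not minikube's exact code:

// Sketch of drift detection via `diff -u`: exit 0 = identical, exit 1 = drifted.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted returns (drifted, diffOutput, err).
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ
	}
	return false, string(out), err // identical (exit 0) or a real error
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}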
	I1027 22:34:29.853146  667713 kubeadm.go:1161] stopping kube-system containers ...
	I1027 22:34:29.853160  667713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1027 22:34:29.853208  667713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:34:29.895272  667713 cri.go:89] found id: ""
	I1027 22:34:29.895347  667713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1027 22:34:29.928571  667713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:34:29.939148  667713 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Oct 27 22:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 27 22:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 27 22:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 27 22:34 /etc/kubernetes/scheduler.conf
	
	I1027 22:34:29.939224  667713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1027 22:34:29.951145  667713 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:29.951223  667713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:34:29.963389  667713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1027 22:34:29.975545  667713 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:29.975607  667713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:34:29.986293  667713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1027 22:34:29.997517  667713 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:29.997588  667713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:34:30.026825  667713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1027 22:34:30.040514  667713 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:34:30.040576  667713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:34:30.057800  667713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:34:30.070718  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:30.130595  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:30.933854  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:31.112785  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:31.192360  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:31.269819  667713 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:34:31.269908  667713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:34:31.770101  667713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
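After removing the stale kubeconfig files, the restart path above replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the version-pinned binaries directory, rather than running a full `kubeadm init`. A sketch of driving one such phase sequence, illustrative only:

// Sketch of replaying `kubeadm init phase ...` subcommands with PATH
// prefixed by the version-pinned binaries directory.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func runPhase(version, phase string) error {
	binDir := "/var/lib/minikube/binaries/" + version
	cmd := exec.Command("sudo", "/bin/bash", "-c",
		fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, binDir, phase))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
		if err := runPhase("v1.28.3", phase); err != nil {
			panic(err)
		}
	}
}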
	I1027 22:34:31.180881  669685 pod_ready.go:94] pod "kube-controller-manager-pause-067652" is "Ready"
	I1027 22:34:31.180923  669685 pod_ready.go:86] duration metric: took 376.728008ms for pod "kube-controller-manager-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:31.380532  669685 pod_ready.go:83] waiting for pod "kube-proxy-zhh4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:31.780439  669685 pod_ready.go:94] pod "kube-proxy-zhh4l" is "Ready"
	I1027 22:34:31.780476  669685 pod_ready.go:86] duration metric: took 399.914858ms for pod "kube-proxy-zhh4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:31.980972  669685 pod_ready.go:83] waiting for pod "kube-scheduler-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:32.379936  669685 pod_ready.go:94] pod "kube-scheduler-pause-067652" is "Ready"
	I1027 22:34:32.379985  669685 pod_ready.go:86] duration metric: took 398.980763ms for pod "kube-scheduler-pause-067652" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:34:32.380002  669685 pod_ready.go:40] duration metric: took 1.604936403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:34:32.456477  669685 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:34:32.460142  669685 out.go:179] * Done! kubectl is now configured to use "pause-067652" cluster and "default" namespace by default
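	The pod_ready waits above poll each control-plane pod until its Ready condition is True or the pod disappears. A minimal client-go sketch of the same check, assuming a kubeconfig at the default location and using the pod name from the log; this is not minikube's pod_ready helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod is Ready or gone, mirroring the "Ready or be gone" wait above.
	name := "kube-proxy-zhh4l" // pod name taken from the log
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Println("pod gone or unreachable:", err)
			return
		}
		if podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(400 * time.Millisecond)
	}
}
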
	I1027 22:34:29.472990  671436 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:34:29.473256  671436 start.go:159] libmachine.API.Create for "kubernetes-upgrade-695499" (driver="docker")
	I1027 22:34:29.473289  671436 client.go:173] LocalClient.Create starting
	I1027 22:34:29.473392  671436 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:34:29.473431  671436 main.go:143] libmachine: Decoding PEM data...
	I1027 22:34:29.473465  671436 main.go:143] libmachine: Parsing certificate...
	I1027 22:34:29.473547  671436 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:34:29.473571  671436 main.go:143] libmachine: Decoding PEM data...
	I1027 22:34:29.473583  671436 main.go:143] libmachine: Parsing certificate...
	I1027 22:34:29.474027  671436 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-695499 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:34:29.493560  671436 cli_runner.go:211] docker network inspect kubernetes-upgrade-695499 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:34:29.493631  671436 network_create.go:284] running [docker network inspect kubernetes-upgrade-695499] to gather additional debugging logs...
	I1027 22:34:29.493653  671436 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-695499
	W1027 22:34:29.511680  671436 cli_runner.go:211] docker network inspect kubernetes-upgrade-695499 returned with exit code 1
	I1027 22:34:29.511744  671436 network_create.go:287] error running [docker network inspect kubernetes-upgrade-695499]: docker network inspect kubernetes-upgrade-695499: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-695499 not found
	I1027 22:34:29.511773  671436 network_create.go:289] output of [docker network inspect kubernetes-upgrade-695499]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-695499 not found
	
	** /stderr **
	I1027 22:34:29.511888  671436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:34:29.531112  671436 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:34:29.532204  671436 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:34:29.532833  671436 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:34:29.534057  671436 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7ff60}
	I1027 22:34:29.534101  671436 network_create.go:124] attempt to create docker network kubernetes-upgrade-695499 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 22:34:29.534184  671436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-695499 kubernetes-upgrade-695499
	I1027 22:34:29.608963  671436 network_create.go:108] docker network kubernetes-upgrade-695499 192.168.76.0/24 created
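	Subnet selection above walks the private /24 ranges, skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because host bridges already occupy them, and settles on 192.168.76.0/24. A reduced sketch of the create step that follows (subnet scanning omitted), with the name, subnet, gateway, MTU and labels taken from the logged command:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Values from the log above.
	name, subnet, gateway := "kubernetes-upgrade-695499", "192.168.76.0/24", "192.168.76.1"
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc", // options exactly as logged
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("network create failed: %v\n%s", err, out)
		return
	}
	fmt.Println("created network", name, "on", subnet)
}
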
	I1027 22:34:29.609002  671436 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-695499" container
	I1027 22:34:29.609077  671436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:34:29.631844  671436 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-695499 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-695499 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:34:29.652007  671436 oci.go:103] Successfully created a docker volume kubernetes-upgrade-695499
	I1027 22:34:29.652138  671436 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-695499-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-695499 --entrypoint /usr/bin/test -v kubernetes-upgrade-695499:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:34:30.077102  671436 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-695499
	I1027 22:34:30.077146  671436 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 22:34:30.077171  671436 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:34:30.077261  671436 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-695499:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
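	The preload step seeds the named volume by running a throwaway container whose entrypoint is tar, extracting the lz4-compressed image tarball into /extractDir so the node container later starts with a warm image store. A sketch of the same pattern; the tarball path is an assumption and the image tag is taken from the log (which pins it further by sha256 digest), while the flags mirror the logged command:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/path/to/preloaded-images-k8s.tar.lz4" // assumed path to an lz4 image tarball
	volume := "kubernetes-upgrade-695499"              // docker volume to seed
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}
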
	I1027 22:34:32.272078  667713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:34:32.290461  667713 api_server.go:72] duration metric: took 1.020648913s to wait for apiserver process to appear ...
	I1027 22:34:32.290491  667713 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:34:32.290518  667713 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 22:34:34.652852  667713 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:34:34.652886  667713 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:34:34.652905  667713 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 22:34:34.667480  667713 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:34:34.667590  667713 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:34:34.790894  667713 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 22:34:34.828017  667713 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1027 22:34:34.828053  667713 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1027 22:34:35.292089  667713 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 22:34:35.297547  667713 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1027 22:34:35.297579  667713 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1027 22:34:35.790638  667713 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 22:34:35.795761  667713 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1027 22:34:35.795804  667713 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1027 22:34:36.291092  667713 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 22:34:36.296602  667713 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1027 22:34:36.305884  667713 api_server.go:141] control plane version: v1.28.3
	I1027 22:34:36.305918  667713 api_server.go:131] duration metric: took 4.015419448s to wait for apiserver health ...
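	The health wait above tolerates the early 403s (anonymous probes are rejected until the rbac/bootstrap-roles poststarthook finishes) and the 500s from still-pending poststarthooks, stopping only on a 200 "ok". A standalone poller in the same spirit, assuming the endpoint from the log; TLS verification is skipped because the probe presents no client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe: no client cert, so don't verify the serving cert either.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.103.2:8443/healthz" // endpoint from the log
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
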
	I1027 22:34:36.305929  667713 cni.go:84] Creating CNI manager for ""
	I1027 22:34:36.305937  667713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:34:36.307855  667713 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:34:36.309888  667713 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:34:36.316161  667713 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1027 22:34:36.316189  667713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:34:36.343441  667713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
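	The CNI manifest is written onto the node and applied with the version-matched kubectl binary and the node-local kubeconfig, so the host's kubectl version never matters for this step. The equivalent one-shot invocation as a sketch, with paths from the log (this stands in for minikube's ssh_runner, which executes the same command over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.3/kubectl" // version-matched binary, as logged
	cmd := exec.Command("sudo", kubectl,
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
	}
}
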
	I1027 22:34:37.190201  667713 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:34:37.194346  667713 system_pods.go:59] 5 kube-system pods found
	I1027 22:34:37.194388  667713 system_pods.go:61] "etcd-stopped-upgrade-126023" [651d95e7-1e73-483e-b15b-9363ff94b3f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:34:37.194400  667713 system_pods.go:61] "kube-apiserver-stopped-upgrade-126023" [61ff2442-1112-4287-901d-1b721caef98b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:34:37.194417  667713 system_pods.go:61] "kube-controller-manager-stopped-upgrade-126023" [d6b0c043-0aff-480b-9e84-74871394c54e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:34:37.194432  667713 system_pods.go:61] "kube-scheduler-stopped-upgrade-126023" [3a813c2f-2d8a-41b9-8df1-d9aaec2a6161] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:34:37.194448  667713 system_pods.go:61] "storage-provisioner" [4880692b-abb3-4014-95fd-103272c47e0f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1027 22:34:37.194462  667713 system_pods.go:74] duration metric: took 4.237631ms to wait for pod list to return data ...
	I1027 22:34:37.194477  667713 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:34:37.197108  667713 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:34:37.197141  667713 node_conditions.go:123] node cpu capacity is 8
	I1027 22:34:37.197155  667713 node_conditions.go:105] duration metric: took 2.67288ms to run NodePressure ...
	I1027 22:34:37.197205  667713 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:34:37.364452  667713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:34:37.373194  667713 ops.go:34] apiserver oom_adj: -16
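	The -16 read here is the legacy /proc/<pid>/oom_adj value for the apiserver; a strongly negative value keeps the kernel OOM killer away from the process. A small reader, assuming pgrep can find the process by name (modern kernels expose oom_score_adj alongside the legacy oom_adj file):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver pid, as `pgrep -xnf` does in the log.
	out, err := exec.Command("pgrep", "-n", "-f", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("no apiserver process:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // e.g. -16
}
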
	I1027 22:34:37.373214  667713 kubeadm.go:602] duration metric: took 7.54637334s to restartPrimaryControlPlane
	I1027 22:34:37.373224  667713 kubeadm.go:403] duration metric: took 7.600141258s to StartCluster
	I1027 22:34:37.373241  667713 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:37.373312  667713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:34:37.374414  667713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:34:37.374683  667713 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:34:37.374748  667713 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:34:37.374872  667713 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-126023"
	I1027 22:34:37.374896  667713 addons.go:238] Setting addon storage-provisioner=true in "stopped-upgrade-126023"
	W1027 22:34:37.374906  667713 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:34:37.374908  667713 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-126023"
	I1027 22:34:37.374939  667713 config.go:182] Loaded profile config "stopped-upgrade-126023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1027 22:34:37.374964  667713 host.go:66] Checking if "stopped-upgrade-126023" exists ...
	I1027 22:34:37.374964  667713 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-126023"
	I1027 22:34:37.375317  667713 cli_runner.go:164] Run: docker container inspect stopped-upgrade-126023 --format={{.State.Status}}
	I1027 22:34:37.375513  667713 cli_runner.go:164] Run: docker container inspect stopped-upgrade-126023 --format={{.State.Status}}
	I1027 22:34:37.379460  667713 out.go:179] * Verifying Kubernetes components...
	I1027 22:34:37.380634  667713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:34:37.396023  667713 kapi.go:59] client config for stopped-upgrade-126023: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/stopped-upgrade-126023/client.key", CAFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:34:37.396397  667713 addons.go:238] Setting addon default-storageclass=true in "stopped-upgrade-126023"
	W1027 22:34:37.396413  667713 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:34:37.396437  667713 host.go:66] Checking if "stopped-upgrade-126023" exists ...
	I1027 22:34:37.396846  667713 cli_runner.go:164] Run: docker container inspect stopped-upgrade-126023 --format={{.State.Status}}
	I1027 22:34:37.397610  667713 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:34:36.684448  666396 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003492 seconds
	I1027 22:34:36.684578  666396 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:34:36.705982  666396 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:34:37.238999  666396 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:34:37.239190  666396 kubeadm.go:322] [mark-control-plane] Marking the node missing-upgrade-912550 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:34:37.750060  666396 kubeadm.go:322] [bootstrap-token] Using token: 6e4wc1.jjkyoyhnrxc7m6gn
	I1027 22:34:37.398773  667713 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:34:37.398792  667713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:34:37.398845  667713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-126023
	I1027 22:34:37.422144  667713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/stopped-upgrade-126023/id_rsa Username:docker}
	I1027 22:34:37.422297  667713 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:34:37.422867  667713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:34:37.423086  667713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-126023
	I1027 22:34:37.447481  667713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/stopped-upgrade-126023/id_rsa Username:docker}
	I1027 22:34:37.504632  667713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:34:37.520101  667713 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:34:37.520174  667713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:34:37.532262  667713 api_server.go:72] duration metric: took 157.538743ms to wait for apiserver process to appear ...
	I1027 22:34:37.532290  667713 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:34:37.532311  667713 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 22:34:37.533766  667713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:34:37.539145  667713 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1027 22:34:37.540638  667713 api_server.go:141] control plane version: v1.28.3
	I1027 22:34:37.540664  667713 api_server.go:131] duration metric: took 8.366179ms to wait for apiserver health ...
	I1027 22:34:37.540676  667713 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:34:37.544373  667713 system_pods.go:59] 5 kube-system pods found
	I1027 22:34:37.544420  667713 system_pods.go:61] "etcd-stopped-upgrade-126023" [651d95e7-1e73-483e-b15b-9363ff94b3f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:34:37.544438  667713 system_pods.go:61] "kube-apiserver-stopped-upgrade-126023" [61ff2442-1112-4287-901d-1b721caef98b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:34:37.544464  667713 system_pods.go:61] "kube-controller-manager-stopped-upgrade-126023" [d6b0c043-0aff-480b-9e84-74871394c54e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:34:37.544488  667713 system_pods.go:61] "kube-scheduler-stopped-upgrade-126023" [3a813c2f-2d8a-41b9-8df1-d9aaec2a6161] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:34:37.544496  667713 system_pods.go:61] "storage-provisioner" [4880692b-abb3-4014-95fd-103272c47e0f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1027 22:34:37.544504  667713 system_pods.go:74] duration metric: took 3.819975ms to wait for pod list to return data ...
	I1027 22:34:37.544522  667713 kubeadm.go:587] duration metric: took 169.805934ms to wait for: map[apiserver:true system_pods:true]
	I1027 22:34:37.544544  667713 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:34:37.547483  667713 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:34:37.547510  667713 node_conditions.go:123] node cpu capacity is 8
	I1027 22:34:37.547527  667713 node_conditions.go:105] duration metric: took 2.977398ms to run NodePressure ...
	I1027 22:34:37.547543  667713 start.go:242] waiting for startup goroutines ...
	I1027 22:34:37.554882  667713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:34:37.920629  667713 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 22:34:37.921707  667713 addons.go:514] duration metric: took 546.961039ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 22:34:37.921748  667713 start.go:247] waiting for cluster config update ...
	I1027 22:34:37.921761  667713 start.go:256] writing updated cluster config ...
	I1027 22:34:37.922041  667713 ssh_runner.go:195] Run: rm -f paused
	I1027 22:34:37.979482  667713 start.go:626] kubectl: 1.34.1, cluster: 1.28.3 (minor skew: 6)
	I1027 22:34:37.980963  667713 out.go:203] 
	W1027 22:34:37.982044  667713 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.3.
	I1027 22:34:37.983129  667713 out.go:179]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
	I1027 22:34:37.984251  667713 out.go:179] * Done! kubectl is now configured to use "stopped-upgrade-126023" cluster and "default" namespace by default
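	The closing warning above is a plain minor-version comparison: kubectl 1.34 against cluster 1.28 is a skew of 6 minors, far outside the one-minor window kubectl supports. A sketch of that arithmetic (minorSkew is a hypothetical helper, not minikube's):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference of the minor components
// of two "major.minor.patch" version strings.
func minorSkew(a, b string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, _ := minorSkew("1.34.1", "1.28.3")
	fmt.Println("minor skew:", skew) // 6, which triggers the warning above
}
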
	
	
	==> CRI-O <==
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.093442597Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.094300047Z" level=info msg="Conmon does support the --sync option"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.094326139Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.094339107Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.095046827Z" level=info msg="Conmon does support the --sync option"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.095060397Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.099281943Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.099304025Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.099916972Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.100424904Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.100484472Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.106862533Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151051056Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-b87mn Namespace:kube-system ID:991950f5e1a656a6f60eaadc37fe143f3dd6312ea3afa1fa0e68ea6ce86df079 UID:813bc0ca-bc78-4362-9408-c9d3da00c90a NetNS:/var/run/netns/837eb9bb-d938-4f05-9ad9-bbf3727b8bf1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a4470}] Aliases:map[]}"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151326459Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-b87mn for CNI network kindnet (type=ptp)"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151836789Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151863441Z" level=info msg="Starting seccomp notifier watcher"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.151918678Z" level=info msg="Create NRI interface"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152060685Z" level=info msg="built-in NRI default validator is disabled"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152078114Z" level=info msg="runtime interface created"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152095547Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152104407Z" level=info msg="runtime interface starting up..."
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152113546Z" level=info msg="starting plugins..."
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152140924Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 27 22:34:29 pause-067652 crio[2118]: time="2025-10-27T22:34:29.152595254Z" level=info msg="No systemd watchdog enabled"
	Oct 27 22:34:29 pause-067652 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a5b7b50d4040b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   991950f5e1a65       coredns-66bc5c9577-b87mn               kube-system
	4d25924c4df6a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   26 seconds ago      Running             kube-proxy                0                   389be2b814e33       kube-proxy-zhh4l                       kube-system
	7e91149f673e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   25054d9b44d58       kindnet-m9bfp                          kube-system
	6eb92e57262ba       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Running             kube-scheduler            0                   f88cb9ca3c2a4       kube-scheduler-pause-067652            kube-system
	81f2d85ae3519       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Running             kube-apiserver            0                   03f80a084c9c1       kube-apiserver-pause-067652            kube-system
	c80661f95d0b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   87ccd61347299       etcd-pause-067652                      kube-system
	4c2ddd4a18261       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Running             kube-controller-manager   0                   f8452ad9be944       kube-controller-manager-pause-067652   kube-system
	
	
	==> coredns [a5b7b50d4040b9768126fbb779cf62f65a7546364bfa376d804072902a378d17] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56539 - 56226 "HINFO IN 1511707238290419606.4793965326947131984. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029915802s
	
	
	==> describe nodes <==
	Name:               pause-067652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-067652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=pause-067652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_34_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:34:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-067652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:34:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:34:22 +0000   Mon, 27 Oct 2025 22:34:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:34:22 +0000   Mon, 27 Oct 2025 22:34:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:34:22 +0000   Mon, 27 Oct 2025 22:34:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:34:22 +0000   Mon, 27 Oct 2025 22:34:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-067652
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4fd845e8-f765-4510-addc-0aac115564fd
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-b87mn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-067652                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-m9bfp                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-067652             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-067652    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-zhh4l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-067652             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node pause-067652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node pause-067652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node pause-067652 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node pause-067652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node pause-067652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node pause-067652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node pause-067652 event: Registered Node pause-067652 in Controller
	  Normal  NodeReady                16s                kubelet          Node pause-067652 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [c80661f95d0b8aa17deb1a4d771ca76f6e8d600e9608fcaf7bcdd6b3d302948e] <==
	{"level":"warn","ts":"2025-10-27T22:34:02.496448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.509903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.523797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.539812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.554964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.574766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.583523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.612804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.616518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.628609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.650509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.662338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.677567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.686539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.695874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.706257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.719229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.735861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.763533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.772647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.821519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.825790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.836450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:34:02.903201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45914","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:34:19.041218Z","caller":"traceutil/trace.go:172","msg":"trace[2125268942] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"116.286183ms","start":"2025-10-27T22:34:18.924907Z","end":"2025-10-27T22:34:19.041193Z","steps":["trace[2125268942] 'process raft request'  (duration: 95.991591ms)","trace[2125268942] 'compare'  (duration: 20.125428ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:34:38 up  2:16,  0 user,  load average: 2.84, 1.45, 2.46
	Linux pause-067652 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e91149f673e2c10688e02752d115bb3adcae29b62e2fd2dc44183254068dcaa] <==
	I1027 22:34:12.209814       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:34:12.210248       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 22:34:12.210468       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:34:12.210492       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:34:12.210523       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:34:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:34:12.506546       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:34:12.506579       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:34:12.506591       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:34:12.506803       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:34:13.007805       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:34:13.007928       1 metrics.go:72] Registering metrics
	I1027 22:34:13.008069       1 controller.go:711] "Syncing nftables rules"
	I1027 22:34:22.417397       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:34:22.417447       1 main.go:301] handling current node
	I1027 22:34:32.421027       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:34:32.421069       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81f2d85ae3519636b9310c3e124b232192aee71611336d7820879bc258ccf577] <==
	I1027 22:34:03.647914       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1027 22:34:03.648407       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:34:03.648642       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 22:34:03.656331       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:34:03.668238       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 22:34:03.681324       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:34:03.685749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:34:03.693729       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:34:04.554228       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 22:34:04.559395       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 22:34:04.559428       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:34:05.181235       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:34:05.219401       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:34:05.264713       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 22:34:05.279491       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 22:34:05.281135       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:34:05.287289       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:34:05.612774       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:34:06.205590       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:34:06.219103       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 22:34:06.228399       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:34:11.386562       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:34:11.406311       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:34:11.458020       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 22:34:11.525863       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4c2ddd4a18261619f09936f5cf6470363cfbdd6270130ee2f201206e52c924e3] <==
	I1027 22:34:10.611487       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 22:34:10.611627       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:34:10.611762       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 22:34:10.611879       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:34:10.612511       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 22:34:10.612550       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:34:10.613239       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:34:10.613283       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:34:10.613441       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:34:10.614773       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:34:10.614796       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:34:10.615389       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:34:10.617473       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:34:10.621091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:34:10.623294       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:34:10.623378       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:34:10.623448       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:34:10.623457       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:34:10.623466       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:34:10.626655       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:34:10.629826       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:34:10.633316       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-067652" podCIDRs=["10.244.0.0/24"]
	I1027 22:34:10.637001       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:34:10.639387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:34:25.613355       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4d25924c4df6aa14a77cfaec45632997d587987e4bcaec77378f5187ae9edcd7] <==
	I1027 22:34:12.006628       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:34:12.078577       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:34:12.179763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:34:12.179834       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 22:34:12.179992       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:34:12.208391       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:34:12.208476       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:34:12.216137       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:34:12.217106       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:34:12.217216       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:34:12.219036       1 config.go:200] "Starting service config controller"
	I1027 22:34:12.219054       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:34:12.219144       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:34:12.219189       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:34:12.219234       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:34:12.219240       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:34:12.219522       1 config.go:309] "Starting node config controller"
	I1027 22:34:12.219796       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:34:12.219866       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:34:12.319474       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:34:12.319501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:34:12.319513       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6eb92e57262ba23c48023890df53c802fa6236f0703a6780cc2f07abf8afe516] <==
	E1027 22:34:03.662038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:34:03.662145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:34:03.662412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:34:03.662772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:34:03.667976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:34:03.668144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:34:03.668232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:34:03.668273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:34:03.668321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:34:03.668381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:34:03.668454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:34:03.669314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:34:03.669498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:34:03.669619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:34:03.669678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:34:03.669703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:34:04.578668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:34:04.605222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:34:04.731737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:34:04.837781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:34:04.838674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:34:04.863208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:34:04.875501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:34:04.881102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1027 22:34:07.151220       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:34:07 pause-067652 kubelet[1279]: E1027 22:34:07.090909    1279 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-067652\" already exists" pod="kube-system/etcd-pause-067652"
	Oct 27 22:34:07 pause-067652 kubelet[1279]: I1027 22:34:07.109767    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-067652" podStartSLOduration=1.109742653 podStartE2EDuration="1.109742653s" podCreationTimestamp="2025-10-27 22:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:07.108936189 +0000 UTC m=+1.137565925" watchObservedRunningTime="2025-10-27 22:34:07.109742653 +0000 UTC m=+1.138372382"
	Oct 27 22:34:07 pause-067652 kubelet[1279]: I1027 22:34:07.146821    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-067652" podStartSLOduration=1.146797789 podStartE2EDuration="1.146797789s" podCreationTimestamp="2025-10-27 22:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:07.128499432 +0000 UTC m=+1.157129157" watchObservedRunningTime="2025-10-27 22:34:07.146797789 +0000 UTC m=+1.175427521"
	Oct 27 22:34:07 pause-067652 kubelet[1279]: I1027 22:34:07.176378    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-067652" podStartSLOduration=1.176355142 podStartE2EDuration="1.176355142s" podCreationTimestamp="2025-10-27 22:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:07.176264965 +0000 UTC m=+1.204894699" watchObservedRunningTime="2025-10-27 22:34:07.176355142 +0000 UTC m=+1.204984876"
	Oct 27 22:34:07 pause-067652 kubelet[1279]: I1027 22:34:07.176532    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-067652" podStartSLOduration=2.176523478 podStartE2EDuration="2.176523478s" podCreationTimestamp="2025-10-27 22:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:07.151671492 +0000 UTC m=+1.180301236" watchObservedRunningTime="2025-10-27 22:34:07.176523478 +0000 UTC m=+1.205153215"
	Oct 27 22:34:10 pause-067652 kubelet[1279]: I1027 22:34:10.704075    1279 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 22:34:10 pause-067652 kubelet[1279]: I1027 22:34:10.704826    1279 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588474    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e3297a67-3f34-4a07-b21e-7bf6c8417586-cni-cfg\") pod \"kindnet-m9bfp\" (UID: \"e3297a67-3f34-4a07-b21e-7bf6c8417586\") " pod="kube-system/kindnet-m9bfp"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588537    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtkt\" (UniqueName: \"kubernetes.io/projected/e3297a67-3f34-4a07-b21e-7bf6c8417586-kube-api-access-6gtkt\") pod \"kindnet-m9bfp\" (UID: \"e3297a67-3f34-4a07-b21e-7bf6c8417586\") " pod="kube-system/kindnet-m9bfp"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588612    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f530998-6842-4db5-bbe1-359bdee56be3-xtables-lock\") pod \"kube-proxy-zhh4l\" (UID: \"2f530998-6842-4db5-bbe1-359bdee56be3\") " pod="kube-system/kube-proxy-zhh4l"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588636    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3297a67-3f34-4a07-b21e-7bf6c8417586-lib-modules\") pod \"kindnet-m9bfp\" (UID: \"e3297a67-3f34-4a07-b21e-7bf6c8417586\") " pod="kube-system/kindnet-m9bfp"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588664    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f530998-6842-4db5-bbe1-359bdee56be3-kube-proxy\") pod \"kube-proxy-zhh4l\" (UID: \"2f530998-6842-4db5-bbe1-359bdee56be3\") " pod="kube-system/kube-proxy-zhh4l"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588693    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2g4l\" (UniqueName: \"kubernetes.io/projected/2f530998-6842-4db5-bbe1-359bdee56be3-kube-api-access-t2g4l\") pod \"kube-proxy-zhh4l\" (UID: \"2f530998-6842-4db5-bbe1-359bdee56be3\") " pod="kube-system/kube-proxy-zhh4l"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588713    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3297a67-3f34-4a07-b21e-7bf6c8417586-xtables-lock\") pod \"kindnet-m9bfp\" (UID: \"e3297a67-3f34-4a07-b21e-7bf6c8417586\") " pod="kube-system/kindnet-m9bfp"
	Oct 27 22:34:11 pause-067652 kubelet[1279]: I1027 22:34:11.588741    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f530998-6842-4db5-bbe1-359bdee56be3-lib-modules\") pod \"kube-proxy-zhh4l\" (UID: \"2f530998-6842-4db5-bbe1-359bdee56be3\") " pod="kube-system/kube-proxy-zhh4l"
	Oct 27 22:34:12 pause-067652 kubelet[1279]: I1027 22:34:12.113703    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m9bfp" podStartSLOduration=1.113677811 podStartE2EDuration="1.113677811s" podCreationTimestamp="2025-10-27 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:12.112823802 +0000 UTC m=+6.141453537" watchObservedRunningTime="2025-10-27 22:34:12.113677811 +0000 UTC m=+6.142307544"
	Oct 27 22:34:12 pause-067652 kubelet[1279]: I1027 22:34:12.146492    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhh4l" podStartSLOduration=1.146460882 podStartE2EDuration="1.146460882s" podCreationTimestamp="2025-10-27 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:12.146339754 +0000 UTC m=+6.174969488" watchObservedRunningTime="2025-10-27 22:34:12.146460882 +0000 UTC m=+6.175090613"
	Oct 27 22:34:22 pause-067652 kubelet[1279]: I1027 22:34:22.824004    1279 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 22:34:22 pause-067652 kubelet[1279]: I1027 22:34:22.975917    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/813bc0ca-bc78-4362-9408-c9d3da00c90a-config-volume\") pod \"coredns-66bc5c9577-b87mn\" (UID: \"813bc0ca-bc78-4362-9408-c9d3da00c90a\") " pod="kube-system/coredns-66bc5c9577-b87mn"
	Oct 27 22:34:22 pause-067652 kubelet[1279]: I1027 22:34:22.975993    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25zm\" (UniqueName: \"kubernetes.io/projected/813bc0ca-bc78-4362-9408-c9d3da00c90a-kube-api-access-f25zm\") pod \"coredns-66bc5c9577-b87mn\" (UID: \"813bc0ca-bc78-4362-9408-c9d3da00c90a\") " pod="kube-system/coredns-66bc5c9577-b87mn"
	Oct 27 22:34:24 pause-067652 kubelet[1279]: I1027 22:34:24.146688    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b87mn" podStartSLOduration=13.146665769 podStartE2EDuration="13.146665769s" podCreationTimestamp="2025-10-27 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:34:24.146146545 +0000 UTC m=+18.174776278" watchObservedRunningTime="2025-10-27 22:34:24.146665769 +0000 UTC m=+18.175295501"
	Oct 27 22:34:33 pause-067652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:34:33 pause-067652 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:34:33 pause-067652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:34:33 pause-067652 systemd[1]: kubelet.service: Consumed 1.238s CPU time.
	

-- /stdout --
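Note: the "martian source ... from 127.0.0.1" entries in the dmesg section above line up with kube-proxy's "Setting route_localnet=1" message later in the same dump, which relaxes loopback routing so NodePorts can be reached on localhost. A minimal manual check of the relevant sysctls inside the node (a hypothetical follow-up, assuming the pause-067652 container is still running, and using the same ssh form as the Audit log below):

	out/minikube-linux-amd64 ssh -p pause-067652 sudo sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians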
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-067652 -n pause-067652
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-067652 -n pause-067652: exit status 2 (423.057898ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-067652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.14s)
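For triage, the status template can report several components in one call; a hypothetical variant of the status command used above, assuming the same template fields as {{.APIServer}} and {{.Host}}:

	out/minikube-linux-amd64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}' -p pause-067652 -n pause-067652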

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.806298ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:37:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
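The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state probe, which shells out to runc on the node; the probe can be replayed by hand with the same command shown in the stderr (a sketch, assuming the node container is up):

	out/minikube-linux-amd64 ssh -p old-k8s-version-908589 sudo runc list -f json

runc's default state directory for root is /run/runc, so the "open /run/runc: no such file or directory" error indicates that directory was never created inside the node, not that the runtime is paused.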
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-908589 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-908589 describe deploy/metrics-server -n kube-system: exit status 1 (61.628202ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-908589 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
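When the metrics-server deployment does exist, the image override can be checked directly; a hypothetical verification using the same kubectl context:

	kubectl --context old-k8s-version-908589 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'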
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-908589
helpers_test.go:243: (dbg) docker inspect old-k8s-version-908589:

-- stdout --
	[
	    {
	        "Id": "2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b",
	        "Created": "2025-10-27T22:36:26.560709331Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702073,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:36:26.597247169Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/hosts",
	        "LogPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b-json.log",
	        "Name": "/old-k8s-version-908589",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-908589:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-908589",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b",
	                "LowerDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-908589",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-908589/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-908589",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-908589",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-908589",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1186d80a3edc90792b67b7c6d9a7ad90cc57cfef526e7b252f789289f2cf6129",
	            "SandboxKey": "/var/run/docker/netns/1186d80a3edc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-908589": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:f6:a2:ce:f2:9d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "798a573c50beee8a1800510c05b8fefb38677fa31ecba8e611494c61259bbf2b",
	                    "EndpointID": "0cebedac64f835d36ac01a55436e9c3e2599e26c9467637da75c409526d4fd57",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-908589",
	                        "2d571bec60f7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
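The inspect output above shows how the API server is published (container port 8443 mapped to a localhost port); that single value can be pulled out with a Go template instead of scanning the full JSON (a sketch against the same container):

	docker inspect old-k8s-version-908589 --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'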
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908589 -n old-k8s-version-908589
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-908589 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-908589 logs -n 25: (1.116618255s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-293335 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo containerd config dump                                                                                                                                                                                                  │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo crio config                                                                                                                                                                                                             │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ delete  │ -p cilium-293335                                                                                                                                                                                                                              │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ stop    │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:37:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:37:07.966369  711813 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:37:07.966857  711813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:37:07.966883  711813 out.go:374] Setting ErrFile to fd 2...
	I1027 22:37:07.966890  711813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:37:07.967358  711813 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:37:07.968283  711813 out.go:368] Setting JSON to false
	I1027 22:37:07.969391  711813 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8367,"bootTime":1761596261,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:37:07.969490  711813 start.go:143] virtualization: kvm guest
	I1027 22:37:07.971382  711813 out.go:179] * [no-preload-188814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:37:07.972859  711813 notify.go:221] Checking for updates...
	I1027 22:37:07.972877  711813 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:37:07.973957  711813 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:37:07.975112  711813 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:37:07.976225  711813 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:37:07.977251  711813 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:37:07.978345  711813 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:37:07.979817  711813 config.go:182] Loaded profile config "cert-expiration-219241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:37:07.979938  711813 config.go:182] Loaded profile config "kubernetes-upgrade-695499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:37:07.980075  711813 config.go:182] Loaded profile config "old-k8s-version-908589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 22:37:07.980180  711813 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:37:08.005768  711813 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:37:08.005852  711813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:37:08.070425  711813 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 22:37:08.060357252 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:37:08.070542  711813 docker.go:318] overlay module found
	I1027 22:37:08.072627  711813 out.go:179] * Using the docker driver based on user configuration
	I1027 22:37:08.073615  711813 start.go:307] selected driver: docker
	I1027 22:37:08.073631  711813 start.go:928] validating driver "docker" against <nil>
	I1027 22:37:08.073655  711813 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:37:08.074352  711813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:37:08.130203  711813 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 22:37:08.120252711 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:37:08.130375  711813 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:37:08.130611  711813 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:37:08.131973  711813 out.go:179] * Using Docker driver with root privileges
	I1027 22:37:08.132880  711813 cni.go:84] Creating CNI manager for ""
	I1027 22:37:08.132957  711813 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:37:08.132973  711813 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:37:08.133041  711813 start.go:351] cluster config:
	{Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:37:08.134062  711813 out.go:179] * Starting "no-preload-188814" primary control-plane node in "no-preload-188814" cluster
	I1027 22:37:08.134980  711813 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:37:08.135917  711813 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:37:08.136809  711813 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:37:08.136901  711813 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:37:08.136927  711813 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/config.json ...
	I1027 22:37:08.136984  711813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/config.json: {Name:mk206c73c104675b03fee07e9d86cee3a8639a5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:37:08.137137  711813 cache.go:107] acquiring lock: {Name:mk07939a87c1b452f98e2733b4044aaef5b7beb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.137139  711813 cache.go:107] acquiring lock: {Name:mke466d23cdbe7dd8079b566141851102bac577e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.137175  711813 cache.go:107] acquiring lock: {Name:mk8b6b09ba52dfb608da0a36c4ec3530523b8436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.137150  711813 cache.go:107] acquiring lock: {Name:mk200c8a2caaaad3c8ed76649a48f615a1ae5be9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.137212  711813 cache.go:107] acquiring lock: {Name:mk413fcda2edd2da77552c9bdc2211a33f344da6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.137215  711813 cache.go:107] acquiring lock: {Name:mk7baa67397d0c68b56096a5558e51581596a4e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.137295  711813 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:08.137308  711813 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:08.137331  711813 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 22:37:08.137344  711813 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 22:37:08.137353  711813 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 232.919µs
	I1027 22:37:08.137336  711813 cache.go:107] acquiring lock: {Name:mkb0147fb3d8ecd8b50c6fd01f6ae7394f0cd687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.137373  711813 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 22:37:08.137382  711813 cache.go:107] acquiring lock: {Name:mke2de66fafbe14869d74cc23f68775c4135be46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.137413  711813 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:08.137454  711813 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 22:37:08.137505  711813 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:08.137543  711813 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:08.138759  711813 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:08.138778  711813 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:08.138820  711813 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:08.138761  711813 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:08.138761  711813 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 22:37:08.138765  711813 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:08.138758  711813 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
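
For a --preload=false start like this one, each required image is first looked up in the local Docker daemon; the "No such image" responses above are the expected cache misses that send minikube to the registry, after which the image is saved as a tarball under .minikube/cache/images/amd64/. A minimal sketch of that check-daemon-first flow, using the docker CLI rather than the Docker API minikube actually calls (the helper name is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInDaemon reports whether the local Docker daemon already has the
	// image: `docker image inspect` exits non-zero when it is absent.
	// (Hypothetical helper; minikube's real lookup uses the daemon API.)
	func imageInDaemon(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		images := []string{
			"registry.k8s.io/kube-apiserver:v1.34.1",
			"registry.k8s.io/pause:3.10.1",
		}
		for _, ref := range images {
			if imageInDaemon(ref) {
				fmt.Printf("%s: found in daemon, skipping pull\n", ref)
				continue
			}
			// Cache miss: the real flow pulls from the registry and saves a
			// tarball under .minikube/cache/images/<arch>/.
			fmt.Printf("%s: not in daemon, would pull from registry\n", ref)
		}
	}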
	I1027 22:37:08.157695  711813 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:37:08.157714  711813 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:37:08.157733  711813 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:37:08.157758  711813 start.go:360] acquireMachinesLock for no-preload-188814: {Name:mkd09e7bc16b18c969a0e9138576a74468fd84c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:37:08.157846  711813 start.go:364] duration metric: took 71.53µs to acquireMachinesLock for "no-preload-188814"
	I1027 22:37:08.157871  711813 start.go:93] Provisioning new machine with config: &{Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:37:08.157964  711813 start.go:125] createHost starting for "" (driver="docker")
	I1027 22:37:08.671305  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1027 22:37:08.671373  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:37:08.671447  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:37:08.717330  682462 cri.go:89] found id: "c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe"
	I1027 22:37:08.717356  682462 cri.go:89] found id: "a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e"
	I1027 22:37:08.717361  682462 cri.go:89] found id: ""
	I1027 22:37:08.717371  682462 logs.go:282] 2 containers: [c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e]
	I1027 22:37:08.717432  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:08.722010  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:08.726148  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:37:08.726241  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:37:08.763971  682462 cri.go:89] found id: ""
	I1027 22:37:08.764000  682462 logs.go:282] 0 containers: []
	W1027 22:37:08.764010  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:37:08.764025  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:37:08.764083  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:37:08.797838  682462 cri.go:89] found id: ""
	I1027 22:37:08.797867  682462 logs.go:282] 0 containers: []
	W1027 22:37:08.797878  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:37:08.797886  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:37:08.797969  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:37:08.831514  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:37:08.831534  682462 cri.go:89] found id: ""
	I1027 22:37:08.831543  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:37:08.831589  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:08.836017  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:37:08.836085  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:37:08.865239  682462 cri.go:89] found id: ""
	I1027 22:37:08.865267  682462 logs.go:282] 0 containers: []
	W1027 22:37:08.865277  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:37:08.865284  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:37:08.865344  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:37:08.897310  682462 cri.go:89] found id: "b58e30f26ed40862207d4e3d2d2635ea64d076fc521ac2a4c7c0385290b3cbbd"
	I1027 22:37:08.897337  682462 cri.go:89] found id: ""
	I1027 22:37:08.897349  682462 logs.go:282] 1 containers: [b58e30f26ed40862207d4e3d2d2635ea64d076fc521ac2a4c7c0385290b3cbbd]
	I1027 22:37:08.897407  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:08.901216  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:37:08.901277  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:37:08.929521  682462 cri.go:89] found id: ""
	I1027 22:37:08.929550  682462 logs.go:282] 0 containers: []
	W1027 22:37:08.929562  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:37:08.929573  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:37:08.929640  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:37:08.964260  682462 cri.go:89] found id: ""
	I1027 22:37:08.964285  682462 logs.go:282] 0 containers: []
	W1027 22:37:08.964296  682462 logs.go:284] No container was found matching "storage-provisioner"
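
Each component probe above boils down to "sudo crictl ps -a --quiet --name=<component>": stdout is a newline-separated list of container IDs, and an empty result produces the "No container was found matching" warnings. A minimal sketch of that list-and-parse step, assuming crictl is on the target host's PATH and the caller can sudo:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs crictl prints for containers whose
	// name matches the component, across all states (-a).
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("%s: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}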
	I1027 22:37:08.964317  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:37:08.964333  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:37:08.985718  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:37:08.985750  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:37:09.041847  682462 logs.go:123] Gathering logs for kube-controller-manager [b58e30f26ed40862207d4e3d2d2635ea64d076fc521ac2a4c7c0385290b3cbbd] ...
	I1027 22:37:09.041881  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b58e30f26ed40862207d4e3d2d2635ea64d076fc521ac2a4c7c0385290b3cbbd"
	I1027 22:37:09.072738  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:37:09.072768  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:37:09.131054  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:37:09.131082  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:37:09.242357  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:37:09.242393  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:37:06.778841  701190 node_ready.go:57] node "old-k8s-version-908589" has "Ready":"False" status (will retry)
	I1027 22:37:07.278420  701190 node_ready.go:49] node "old-k8s-version-908589" is "Ready"
	I1027 22:37:07.278446  701190 node_ready.go:38] duration metric: took 13.503330382s for node "old-k8s-version-908589" to be "Ready" ...
	I1027 22:37:07.278460  701190 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:37:07.278506  701190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:37:07.291042  701190 api_server.go:72] duration metric: took 13.921636068s to wait for apiserver process to appear ...
	I1027 22:37:07.291063  701190 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:37:07.291085  701190 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 22:37:07.295057  701190 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1027 22:37:07.296083  701190 api_server.go:141] control plane version: v1.28.0
	I1027 22:37:07.296107  701190 api_server.go:131] duration metric: took 5.038849ms to wait for apiserver health ...
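
The healthz wait above is a poll of https://<node-ip>:8443/healthz until the apiserver answers 200 ("ok"). A minimal sketch of that loop; TLS verification is skipped here purely for brevity, since the apiserver's cert is signed by minikube's own CA (minikube's real client trusts that CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns
	// 200 or the deadline passes.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200: "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}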
	I1027 22:37:07.296115  701190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:37:07.299482  701190 system_pods.go:59] 8 kube-system pods found
	I1027 22:37:07.299511  701190 system_pods.go:61] "coredns-5dd5756b68-jwp99" [bb1a9fac-9dcc-4267-8887-7d24c3f052c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:37:07.299519  701190 system_pods.go:61] "etcd-old-k8s-version-908589" [cdf5b1cf-cbd9-4b4c-a2fc-515ce5bba622] Running
	I1027 22:37:07.299529  701190 system_pods.go:61] "kindnet-v6dh4" [457d183d-2a92-418b-aecd-5b20e8d58d98] Running
	I1027 22:37:07.299535  701190 system_pods.go:61] "kube-apiserver-old-k8s-version-908589" [1e850070-889d-4676-a95a-284bb61a43a2] Running
	I1027 22:37:07.299545  701190 system_pods.go:61] "kube-controller-manager-old-k8s-version-908589" [d1dfd209-8113-4c2c-861a-af49f7c96bf6] Running
	I1027 22:37:07.299550  701190 system_pods.go:61] "kube-proxy-srms5" [e85ff7a5-d5a3-4eca-b969-465d08c1e022] Running
	I1027 22:37:07.299557  701190 system_pods.go:61] "kube-scheduler-old-k8s-version-908589" [5d2d2b99-9176-4b89-900e-2f3f38a69e83] Running
	I1027 22:37:07.299562  701190 system_pods.go:61] "storage-provisioner" [02cfcc15-9ca3-459d-9151-b34ba21474a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:37:07.299572  701190 system_pods.go:74] duration metric: took 3.451569ms to wait for pod list to return data ...
	I1027 22:37:07.299581  701190 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:37:07.301354  701190 default_sa.go:45] found service account: "default"
	I1027 22:37:07.301370  701190 default_sa.go:55] duration metric: took 1.783827ms for default service account to be created ...
	I1027 22:37:07.301377  701190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:37:07.305285  701190 system_pods.go:86] 8 kube-system pods found
	I1027 22:37:07.305315  701190 system_pods.go:89] "coredns-5dd5756b68-jwp99" [bb1a9fac-9dcc-4267-8887-7d24c3f052c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:37:07.305322  701190 system_pods.go:89] "etcd-old-k8s-version-908589" [cdf5b1cf-cbd9-4b4c-a2fc-515ce5bba622] Running
	I1027 22:37:07.305331  701190 system_pods.go:89] "kindnet-v6dh4" [457d183d-2a92-418b-aecd-5b20e8d58d98] Running
	I1027 22:37:07.305340  701190 system_pods.go:89] "kube-apiserver-old-k8s-version-908589" [1e850070-889d-4676-a95a-284bb61a43a2] Running
	I1027 22:37:07.305350  701190 system_pods.go:89] "kube-controller-manager-old-k8s-version-908589" [d1dfd209-8113-4c2c-861a-af49f7c96bf6] Running
	I1027 22:37:07.305357  701190 system_pods.go:89] "kube-proxy-srms5" [e85ff7a5-d5a3-4eca-b969-465d08c1e022] Running
	I1027 22:37:07.305366  701190 system_pods.go:89] "kube-scheduler-old-k8s-version-908589" [5d2d2b99-9176-4b89-900e-2f3f38a69e83] Running
	I1027 22:37:07.305374  701190 system_pods.go:89] "storage-provisioner" [02cfcc15-9ca3-459d-9151-b34ba21474a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:37:07.305402  701190 retry.go:31] will retry after 269.973831ms: missing components: kube-dns
	I1027 22:37:07.579933  701190 system_pods.go:86] 8 kube-system pods found
	I1027 22:37:07.579996  701190 system_pods.go:89] "coredns-5dd5756b68-jwp99" [bb1a9fac-9dcc-4267-8887-7d24c3f052c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:37:07.580006  701190 system_pods.go:89] "etcd-old-k8s-version-908589" [cdf5b1cf-cbd9-4b4c-a2fc-515ce5bba622] Running
	I1027 22:37:07.580016  701190 system_pods.go:89] "kindnet-v6dh4" [457d183d-2a92-418b-aecd-5b20e8d58d98] Running
	I1027 22:37:07.580023  701190 system_pods.go:89] "kube-apiserver-old-k8s-version-908589" [1e850070-889d-4676-a95a-284bb61a43a2] Running
	I1027 22:37:07.580030  701190 system_pods.go:89] "kube-controller-manager-old-k8s-version-908589" [d1dfd209-8113-4c2c-861a-af49f7c96bf6] Running
	I1027 22:37:07.580037  701190 system_pods.go:89] "kube-proxy-srms5" [e85ff7a5-d5a3-4eca-b969-465d08c1e022] Running
	I1027 22:37:07.580049  701190 system_pods.go:89] "kube-scheduler-old-k8s-version-908589" [5d2d2b99-9176-4b89-900e-2f3f38a69e83] Running
	I1027 22:37:07.580058  701190 system_pods.go:89] "storage-provisioner" [02cfcc15-9ca3-459d-9151-b34ba21474a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:37:07.580090  701190 retry.go:31] will retry after 300.52184ms: missing components: kube-dns
	I1027 22:37:07.885628  701190 system_pods.go:86] 8 kube-system pods found
	I1027 22:37:07.885663  701190 system_pods.go:89] "coredns-5dd5756b68-jwp99" [bb1a9fac-9dcc-4267-8887-7d24c3f052c9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:37:07.885672  701190 system_pods.go:89] "etcd-old-k8s-version-908589" [cdf5b1cf-cbd9-4b4c-a2fc-515ce5bba622] Running
	I1027 22:37:07.885679  701190 system_pods.go:89] "kindnet-v6dh4" [457d183d-2a92-418b-aecd-5b20e8d58d98] Running
	I1027 22:37:07.885683  701190 system_pods.go:89] "kube-apiserver-old-k8s-version-908589" [1e850070-889d-4676-a95a-284bb61a43a2] Running
	I1027 22:37:07.885687  701190 system_pods.go:89] "kube-controller-manager-old-k8s-version-908589" [d1dfd209-8113-4c2c-861a-af49f7c96bf6] Running
	I1027 22:37:07.885690  701190 system_pods.go:89] "kube-proxy-srms5" [e85ff7a5-d5a3-4eca-b969-465d08c1e022] Running
	I1027 22:37:07.885694  701190 system_pods.go:89] "kube-scheduler-old-k8s-version-908589" [5d2d2b99-9176-4b89-900e-2f3f38a69e83] Running
	I1027 22:37:07.885700  701190 system_pods.go:89] "storage-provisioner" [02cfcc15-9ca3-459d-9151-b34ba21474a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:37:07.885720  701190 retry.go:31] will retry after 354.300582ms: missing components: kube-dns
	I1027 22:37:08.245547  701190 system_pods.go:86] 8 kube-system pods found
	I1027 22:37:08.245583  701190 system_pods.go:89] "coredns-5dd5756b68-jwp99" [bb1a9fac-9dcc-4267-8887-7d24c3f052c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:37:08.245590  701190 system_pods.go:89] "etcd-old-k8s-version-908589" [cdf5b1cf-cbd9-4b4c-a2fc-515ce5bba622] Running
	I1027 22:37:08.245597  701190 system_pods.go:89] "kindnet-v6dh4" [457d183d-2a92-418b-aecd-5b20e8d58d98] Running
	I1027 22:37:08.245603  701190 system_pods.go:89] "kube-apiserver-old-k8s-version-908589" [1e850070-889d-4676-a95a-284bb61a43a2] Running
	I1027 22:37:08.245609  701190 system_pods.go:89] "kube-controller-manager-old-k8s-version-908589" [d1dfd209-8113-4c2c-861a-af49f7c96bf6] Running
	I1027 22:37:08.245619  701190 system_pods.go:89] "kube-proxy-srms5" [e85ff7a5-d5a3-4eca-b969-465d08c1e022] Running
	I1027 22:37:08.245628  701190 system_pods.go:89] "kube-scheduler-old-k8s-version-908589" [5d2d2b99-9176-4b89-900e-2f3f38a69e83] Running
	I1027 22:37:08.245633  701190 system_pods.go:89] "storage-provisioner" [02cfcc15-9ca3-459d-9151-b34ba21474a3] Running
	I1027 22:37:08.245646  701190 system_pods.go:126] duration metric: took 944.262753ms to wait for k8s-apps to be running ...
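
The "will retry after 269ms/300ms/354ms" lines above come from a jittered retry helper: each attempt re-lists the kube-system pods and sleeps a randomized, slowly growing interval until no component is missing. A generic sketch of that pattern; the backoff constants below are illustrative, not minikube's actual values:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil calls check until it returns nil or maxWait elapses,
	// sleeping a jittered, growing interval between attempts.
	func retryUntil(maxWait time.Duration, check func() error) error {
		deadline := time.Now().Add(maxWait)
		interval := 250 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return err
			}
			sleep := interval + time.Duration(rand.Int63n(int64(interval/2)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			interval = interval * 11 / 10 // grow ~10% per attempt
		}
	}

	func main() {
		attempts := 0
		_ = retryUntil(10*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return fmt.Errorf("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("all components running")
	}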
	I1027 22:37:08.245658  701190 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:37:08.245711  701190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:37:08.258755  701190 system_svc.go:56] duration metric: took 13.09068ms WaitForService to wait for kubelet
	I1027 22:37:08.258780  701190 kubeadm.go:587] duration metric: took 14.889378784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:37:08.258805  701190 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:37:08.261512  701190 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:37:08.261534  701190 node_conditions.go:123] node cpu capacity is 8
	I1027 22:37:08.261546  701190 node_conditions.go:105] duration metric: took 2.736259ms to run NodePressure ...
	I1027 22:37:08.261558  701190 start.go:242] waiting for startup goroutines ...
	I1027 22:37:08.261565  701190 start.go:247] waiting for cluster config update ...
	I1027 22:37:08.261577  701190 start.go:256] writing updated cluster config ...
	I1027 22:37:08.261812  701190 ssh_runner.go:195] Run: rm -f paused
	I1027 22:37:08.265670  701190 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:37:08.269624  701190 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jwp99" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 22:37:10.276252  701190 pod_ready.go:104] pod "coredns-5dd5756b68-jwp99" is not "Ready", error: <nil>
	I1027 22:37:08.160895  711813 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:37:08.161153  711813 start.go:159] libmachine.API.Create for "no-preload-188814" (driver="docker")
	I1027 22:37:08.161189  711813 client.go:173] LocalClient.Create starting
	I1027 22:37:08.161266  711813 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:37:08.161312  711813 main.go:143] libmachine: Decoding PEM data...
	I1027 22:37:08.161329  711813 main.go:143] libmachine: Parsing certificate...
	I1027 22:37:08.161382  711813 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:37:08.161401  711813 main.go:143] libmachine: Decoding PEM data...
	I1027 22:37:08.161409  711813 main.go:143] libmachine: Parsing certificate...
	I1027 22:37:08.161720  711813 cli_runner.go:164] Run: docker network inspect no-preload-188814 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:37:08.179392  711813 cli_runner.go:211] docker network inspect no-preload-188814 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:37:08.179449  711813 network_create.go:284] running [docker network inspect no-preload-188814] to gather additional debugging logs...
	I1027 22:37:08.179464  711813 cli_runner.go:164] Run: docker network inspect no-preload-188814
	W1027 22:37:08.195239  711813 cli_runner.go:211] docker network inspect no-preload-188814 returned with exit code 1
	I1027 22:37:08.195264  711813 network_create.go:287] error running [docker network inspect no-preload-188814]: docker network inspect no-preload-188814: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-188814 not found
	I1027 22:37:08.195278  711813 network_create.go:289] output of [docker network inspect no-preload-188814]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-188814 not found
	
	** /stderr **
	I1027 22:37:08.195364  711813 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:37:08.212106  711813 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:37:08.213168  711813 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:37:08.213796  711813 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:37:08.214568  711813 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a2ac9625014b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:fb:26:35:6f:70} reservation:<nil>}
	I1027 22:37:08.215731  711813 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-91d74121f1a1 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:1e:97:c3:11:c8:7a} reservation:<nil>}
	I1027 22:37:08.217274  711813 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bf2f70}
	I1027 22:37:08.217303  711813 network_create.go:124] attempt to create docker network no-preload-188814 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 22:37:08.217364  711813 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-188814 no-preload-188814
	I1027 22:37:08.279121  711813 network_create.go:108] docker network no-preload-188814 192.168.94.0/24 created
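
In the subnet scan above, the candidates advance the third octet by 9 (49, 58, 67, 76, 85, 94), and the first /24 not already backing an existing bridge wins; the gateway gets .1 and the node .2. A sketch of that walk with the taken set hard-coded from this log (minikube's real allocator in network.go also inspects host interfaces):

	package main

	import "fmt"

	func main() {
		// Subnets already claimed by other minikube networks (per the log).
		taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}

		// Walk 192.168.<octet>.0/24 candidates, stepping the octet by 9,
		// and take the first free one.
		for octet := 49; octet < 256; octet += 9 {
			if taken[octet] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway 192.168.%d.1, node IP 192.168.%d.2)\n", octet, octet, octet)
			break
		}
	}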
	I1027 22:37:08.279156  711813 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-188814" container
	I1027 22:37:08.279217  711813 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:37:08.281899  711813 cache.go:162] opening:  /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 22:37:08.292620  711813 cache.go:162] opening:  /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 22:37:08.296296  711813 cli_runner.go:164] Run: docker volume create no-preload-188814 --label name.minikube.sigs.k8s.io=no-preload-188814 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:37:08.300651  711813 cache.go:162] opening:  /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1027 22:37:08.304670  711813 cache.go:162] opening:  /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 22:37:08.311001  711813 cache.go:162] opening:  /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 22:37:08.313925  711813 oci.go:103] Successfully created a docker volume no-preload-188814
	I1027 22:37:08.314001  711813 cli_runner.go:164] Run: docker run --rm --name no-preload-188814-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-188814 --entrypoint /usr/bin/test -v no-preload-188814:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:37:08.335870  711813 cache.go:162] opening:  /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1027 22:37:08.345407  711813 cache.go:162] opening:  /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 22:37:08.407693  711813 cache.go:157] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1027 22:37:08.407716  711813 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 270.570221ms
	I1027 22:37:08.407728  711813 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 22:37:08.726463  711813 oci.go:107] Successfully prepared a docker volume no-preload-188814
	I1027 22:37:08.726499  711813 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1027 22:37:08.726590  711813 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:37:08.726631  711813 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:37:08.726676  711813 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:37:08.803403  711813 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-188814 --name no-preload-188814 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-188814 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-188814 --network no-preload-188814 --ip 192.168.94.2 --volume no-preload-188814:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:37:08.805932  711813 cache.go:157] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 22:37:08.805987  711813 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 668.65937ms
	I1027 22:37:08.806000  711813 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 22:37:09.102526  711813 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Running}}
	I1027 22:37:09.123231  711813 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:37:09.142779  711813 cli_runner.go:164] Run: docker exec no-preload-188814 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:37:09.191443  711813 oci.go:144] the created container "no-preload-188814" has a running status.
	I1027 22:37:09.191472  711813 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa...
	I1027 22:37:09.839066  711813 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:37:09.867390  711813 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:37:09.889023  711813 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:37:09.889048  711813 kic_runner.go:114] Args: [docker exec --privileged no-preload-188814 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:37:09.941351  711813 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:37:09.960983  711813 machine.go:94] provisionDockerMachine start ...
	I1027 22:37:09.961066  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:09.980732  711813 main.go:143] libmachine: Using SSH client type: native
	I1027 22:37:09.981083  711813 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1027 22:37:09.981102  711813 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:37:10.015593  711813 cache.go:157] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 22:37:10.015622  711813 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.878447214s
	I1027 22:37:10.015650  711813 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 22:37:10.136752  711813 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-188814
	
	I1027 22:37:10.136784  711813 ubuntu.go:182] provisioning hostname "no-preload-188814"
	I1027 22:37:10.136862  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:10.156705  711813 main.go:143] libmachine: Using SSH client type: native
	I1027 22:37:10.157012  711813 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1027 22:37:10.157034  711813 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-188814 && echo "no-preload-188814" | sudo tee /etc/hostname
	I1027 22:37:10.244372  711813 cache.go:157] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 22:37:10.244407  711813 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.10728368s
	I1027 22:37:10.244423  711813 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 22:37:10.288257  711813 cache.go:157] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 22:37:10.288281  711813 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.150996229s
	I1027 22:37:10.288296  711813 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 22:37:10.325033  711813 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-188814
	
	I1027 22:37:10.325147  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:10.344439  711813 main.go:143] libmachine: Using SSH client type: native
	I1027 22:37:10.344668  711813 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1027 22:37:10.344684  711813 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188814/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188814' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:37:10.349572  711813 cache.go:157] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 22:37:10.349596  711813 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.212394901s
	I1027 22:37:10.349610  711813 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 22:37:10.493370  711813 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:37:10.493402  711813 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:37:10.493442  711813 ubuntu.go:190] setting up certificates
	I1027 22:37:10.493461  711813 provision.go:84] configureAuth start
	I1027 22:37:10.493538  711813 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:37:10.512031  711813 provision.go:143] copyHostCerts
	I1027 22:37:10.512099  711813 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:37:10.512112  711813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:37:10.512197  711813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:37:10.512311  711813 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:37:10.512325  711813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:37:10.512371  711813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:37:10.512463  711813 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:37:10.512473  711813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:37:10.512506  711813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:37:10.512591  711813 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.no-preload-188814 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-188814]
	I1027 22:37:10.887249  711813 provision.go:177] copyRemoteCerts
	I1027 22:37:10.887313  711813 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:37:10.887350  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:10.898404  711813 cache.go:157] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 22:37:10.898438  711813 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.761234191s
	I1027 22:37:10.898460  711813 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 22:37:10.898482  711813 cache.go:87] Successfully saved all images to host disk.
	I1027 22:37:10.905695  711813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:37:11.008049  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:37:11.028525  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:37:11.047492  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:37:11.065590  711813 provision.go:87] duration metric: took 572.112552ms to configureAuth
	I1027 22:37:11.065618  711813 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:37:11.065784  711813 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:37:11.065883  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:11.083470  711813 main.go:143] libmachine: Using SSH client type: native
	I1027 22:37:11.083712  711813 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1027 22:37:11.083734  711813 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:37:11.345680  711813 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:37:11.345706  711813 machine.go:97] duration metric: took 1.384704611s to provisionDockerMachine
	I1027 22:37:11.345719  711813 client.go:176] duration metric: took 3.18451847s to LocalClient.Create
	I1027 22:37:11.345739  711813 start.go:167] duration metric: took 3.184586862s to libmachine.API.Create "no-preload-188814"
	I1027 22:37:11.345748  711813 start.go:293] postStartSetup for "no-preload-188814" (driver="docker")
	I1027 22:37:11.345762  711813 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:37:11.345835  711813 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:37:11.345890  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:11.364339  711813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:37:11.467803  711813 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:37:11.471837  711813 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:37:11.471870  711813 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:37:11.471884  711813 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:37:11.471962  711813 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:37:11.472074  711813 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:37:11.472212  711813 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:37:11.480835  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:37:11.503053  711813 start.go:296] duration metric: took 157.276261ms for postStartSetup
	I1027 22:37:11.503456  711813 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:37:11.521766  711813 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/config.json ...
	I1027 22:37:11.522061  711813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:37:11.522106  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:11.539991  711813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:37:11.638737  711813 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:37:11.643747  711813 start.go:128] duration metric: took 3.485766052s to createHost
	I1027 22:37:11.643775  711813 start.go:83] releasing machines lock for "no-preload-188814", held for 3.485916452s
	I1027 22:37:11.643850  711813 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:37:11.661188  711813 ssh_runner.go:195] Run: cat /version.json
	I1027 22:37:11.661238  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:11.661285  711813 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:37:11.661357  711813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:37:11.679444  711813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:37:11.679777  711813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:37:11.831593  711813 ssh_runner.go:195] Run: systemctl --version
	I1027 22:37:11.838713  711813 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:37:11.874377  711813 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:37:11.879389  711813 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:37:11.879463  711813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:37:11.906568  711813 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
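Note: the find/mv pair above sidelines any pre-existing bridge or podman CNI configs (renaming them with a .mk_disabled suffix) so they cannot conflict with the kindnet CNI that minikube installs later. A rough Go equivalent of that rename pass (illustrative only; the real command runs under sudo on the node):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Match the patterns the logged find command uses, then sideline
        // each hit by renaming it with a .mk_disabled suffix.
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                }
            }
        }
    }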
	I1027 22:37:11.906591  711813 start.go:496] detecting cgroup driver to use...
	I1027 22:37:11.906635  711813 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:37:11.906688  711813 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:37:11.923962  711813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:37:11.937797  711813 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:37:11.937870  711813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:37:11.955772  711813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:37:11.975820  711813 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:37:12.078232  711813 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:37:12.180009  711813 docker.go:234] disabling docker service ...
	I1027 22:37:12.180080  711813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:37:12.201246  711813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:37:12.215386  711813 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:37:12.304107  711813 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:37:12.393722  711813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:37:12.407739  711813 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:37:12.423708  711813 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:37:12.423781  711813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:37:12.435505  711813 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:37:12.435566  711813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:37:12.445730  711813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:37:12.455869  711813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:37:12.466123  711813 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:37:12.474936  711813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:37:12.484299  711813 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:37:12.498961  711813 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:37:12.508423  711813 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:37:12.517029  711813 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:37:12.525516  711813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:37:12.613427  711813 ssh_runner.go:195] Run: sudo systemctl restart crio
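Note: the sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place — it pins pause_image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to systemd, re-adds conmon_cgroup = "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0 — then restarts crio to apply the result. One of those edits as a standalone Go sketch (the file excerpt is assumed for illustration, not taken from the run):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        conf := "# 02-crio.conf (excerpt)\n" +
            "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "cgroup_manager = \"cgroupfs\"\n"
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        fmt.Print(re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`))
    }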
	I1027 22:37:12.965705  711813 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:37:12.965780  711813 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:37:12.970430  711813 start.go:564] Will wait 60s for crictl version
	I1027 22:37:12.970500  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:12.974412  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:37:12.999333  711813 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:37:12.999426  711813 ssh_runner.go:195] Run: crio --version
	I1027 22:37:13.035912  711813 ssh_runner.go:195] Run: crio --version
	I1027 22:37:13.067516  711813 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 22:37:12.775299  701190 pod_ready.go:104] pod "coredns-5dd5756b68-jwp99" is not "Ready", error: <nil>
	W1027 22:37:14.776200  701190 pod_ready.go:104] pod "coredns-5dd5756b68-jwp99" is not "Ready", error: <nil>
	I1027 22:37:13.068551  711813 cli_runner.go:164] Run: docker network inspect no-preload-188814 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:37:13.086144  711813 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 22:37:13.090551  711813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:37:13.101001  711813 kubeadm.go:884] updating cluster {Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:37:13.101103  711813 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:37:13.101141  711813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:37:13.126227  711813 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 22:37:13.126254  711813 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1027 22:37:13.126299  711813 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:37:13.126327  711813 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:13.126359  711813 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:13.126367  711813 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 22:37:13.126410  711813 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:13.126416  711813 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:13.126324  711813 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:13.126575  711813 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 22:37:13.127692  711813 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 22:37:13.127704  711813 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:13.127713  711813 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:13.127722  711813 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:13.127718  711813 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:37:13.127785  711813 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:13.127789  711813 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:13.127863  711813 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
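Note: each image.go:181 line above is the expected miss path — minikube first asks the local Docker daemon for each image and, on "No such image", falls back to pulling or to its on-disk cache. A hedged sketch of that lookup order using go-containerregistry (the library these image paths suggest; the exact fallback wiring here is an assumption):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/go-containerregistry/pkg/name"
        v1 "github.com/google/go-containerregistry/pkg/v1"
        "github.com/google/go-containerregistry/pkg/v1/daemon"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    // retrieve tries the local Docker daemon first, then the registry.
    func retrieve(image string) (v1.Image, error) {
        ref, err := name.ParseReference(image)
        if err != nil {
            return nil, err
        }
        if img, err := daemon.Image(ref); err == nil {
            return img, nil // local Docker daemon had it
        }
        return remote.Image(ref) // fall back to the remote registry
    }

    func main() {
        if _, err := retrieve("registry.k8s.io/pause:3.10.1"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("image resolved")
    }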
	I1027 22:37:13.239739  711813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:13.268211  711813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:13.272892  711813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1027 22:37:13.273142  711813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:13.276846  711813 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1027 22:37:13.276892  711813 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:13.276940  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:13.282365  711813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:13.288180  711813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:13.297931  711813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1027 22:37:13.315797  711813 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1027 22:37:13.315850  711813 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:13.315902  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:13.323069  711813 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1027 22:37:13.323118  711813 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:13.323138  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:13.323162  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:13.323245  711813 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1027 22:37:13.323276  711813 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 22:37:13.323311  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:13.335344  711813 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1027 22:37:13.335392  711813 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:13.335441  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:13.341647  711813 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1027 22:37:13.341695  711813 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:13.341744  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:13.345593  711813 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1027 22:37:13.345645  711813 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1027 22:37:13.345686  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:13.345710  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:13.345691  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:13.354516  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 22:37:13.354570  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:13.354515  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:13.354626  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:13.382838  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 22:37:13.382953  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:13.383031  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:13.394740  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 22:37:13.394882  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:13.395054  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 22:37:13.395161  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:13.422012  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 22:37:13.422181  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 22:37:13.430261  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 22:37:13.432763  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 22:37:13.435413  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 22:37:13.440251  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 22:37:13.440514  711813 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 22:37:13.440613  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 22:37:13.461126  711813 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 22:37:13.461246  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 22:37:13.461346  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 22:37:13.467106  711813 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1027 22:37:13.467221  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1027 22:37:13.470194  711813 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 22:37:13.470296  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 22:37:13.474049  711813 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 22:37:13.474145  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 22:37:13.474450  711813 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 22:37:13.474518  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1027 22:37:13.474580  711813 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1027 22:37:13.474596  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1027 22:37:13.474649  711813 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1027 22:37:13.474664  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1027 22:37:13.503800  711813 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1027 22:37:13.503826  711813 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1027 22:37:13.503853  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1027 22:37:13.503856  711813 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1027 22:37:13.503895  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1027 22:37:13.503906  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1027 22:37:13.503878  711813 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1027 22:37:13.503830  711813 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1027 22:37:13.503939  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1027 22:37:13.503977  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1027 22:37:13.650912  711813 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1027 22:37:13.650974  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1027 22:37:13.708554  711813 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1027 22:37:13.708618  711813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1027 22:37:14.153491  711813 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
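Note: the cycle just completed for pause_3.10.1 repeats for every image below — stat the tarball on the node; when the stat exits with status 1 the cached tar is scp'd over from the host, then `sudo podman load -i` imports it into the node's container storage. A compressed sketch of that per-image loop (hypothetical runner hooks; minikube's ssh_runner does considerably more):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureLoaded mirrors the per-image cycle in the log: check, copy, load.
    // sshRun stands in for minikube's ssh_runner; copyToNode stands in for scp.
    func ensureLoaded(sshRun func(cmd string) error, copyToNode func(src, dst string) error, src, dst string) error {
        // Existence check: `stat -c "%s %y" <dst>` fails when the file is absent.
        if sshRun(fmt.Sprintf(`stat -c "%%s %%y" %s`, dst)) == nil {
            return nil // already transferred on a previous run
        }
        if err := copyToNode(src, dst); err != nil {
            return err
        }
        // Import the tarball into the node's container storage.
        return sshRun("sudo podman load -i " + dst)
    }

    func main() {
        local := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
        cp := func(src, dst string) error { return exec.Command("cp", src, dst).Run() }
        _ = ensureLoaded(local, cp, "/tmp/pause_3.10.1", "/tmp/images/pause_3.10.1")
    }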
	I1027 22:37:14.153529  711813 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 22:37:14.153583  711813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 22:37:14.498780  711813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:37:15.216223  711813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.062611787s)
	I1027 22:37:15.216252  711813 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1027 22:37:15.216289  711813 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1027 22:37:15.216301  711813 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1027 22:37:15.216331  711813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1027 22:37:15.216347  711813 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:37:15.216395  711813 ssh_runner.go:195] Run: which crictl
	I1027 22:37:15.220692  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:37:16.460207  711813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.243843545s)
	I1027 22:37:16.460235  711813 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1027 22:37:16.460253  711813 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 22:37:16.460264  711813 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.239538738s)
	I1027 22:37:16.460298  711813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 22:37:16.460334  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:37:16.485851  711813 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:37:17.649839  711813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.189512169s)
	I1027 22:37:17.649874  711813 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1027 22:37:17.649872  711813 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.163986005s)
	I1027 22:37:17.649905  711813 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 22:37:17.649917  711813 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1027 22:37:17.649980  711813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 22:37:17.650031  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1027 22:37:17.275457  701190 pod_ready.go:94] pod "coredns-5dd5756b68-jwp99" is "Ready"
	I1027 22:37:17.275484  701190 pod_ready.go:86] duration metric: took 9.005833472s for pod "coredns-5dd5756b68-jwp99" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:17.278463  701190 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:17.282720  701190 pod_ready.go:94] pod "etcd-old-k8s-version-908589" is "Ready"
	I1027 22:37:17.282743  701190 pod_ready.go:86] duration metric: took 4.253867ms for pod "etcd-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:17.285552  701190 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:17.289465  701190 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-908589" is "Ready"
	I1027 22:37:17.289487  701190 pod_ready.go:86] duration metric: took 3.914423ms for pod "kube-apiserver-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:17.292081  701190 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:17.473050  701190 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-908589" is "Ready"
	I1027 22:37:17.473077  701190 pod_ready.go:86] duration metric: took 180.975412ms for pod "kube-controller-manager-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:17.674039  701190 pod_ready.go:83] waiting for pod "kube-proxy-srms5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:18.073408  701190 pod_ready.go:94] pod "kube-proxy-srms5" is "Ready"
	I1027 22:37:18.073439  701190 pod_ready.go:86] duration metric: took 399.374809ms for pod "kube-proxy-srms5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:18.274318  701190 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:18.673854  701190 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-908589" is "Ready"
	I1027 22:37:18.673886  701190 pod_ready.go:86] duration metric: took 399.539457ms for pod "kube-scheduler-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:37:18.673901  701190 pod_ready.go:40] duration metric: took 10.408200924s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:37:18.719558  701190 start.go:626] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1027 22:37:18.724364  701190 out.go:203] 
	W1027 22:37:18.725494  701190 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 22:37:18.726560  701190 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 22:37:18.727925  701190 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-908589" cluster and "default" namespace by default
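Note: the pod_ready lines above poll each labelled kube-system pod until its Ready condition is True (or the pod is gone), then report the total wait. A bare client-go version of that readiness check (a sketch; minikube's own helper differs):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        ctx := context.Background()
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-jwp99", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // poll, as the repeated pod_ready lines suggest
        }
        fmt.Println("timed out waiting for pod")
    }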
	I1027 22:37:19.029377  711813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.379368823s)
	I1027 22:37:19.029405  711813 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1027 22:37:19.029420  711813 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.379361297s)
	I1027 22:37:19.029442  711813 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 22:37:19.029467  711813 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1027 22:37:19.029504  711813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 22:37:19.029501  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1027 22:37:20.147669  711813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.118138377s)
	I1027 22:37:20.147694  711813 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1027 22:37:20.147717  711813 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1027 22:37:20.147752  711813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1027 22:37:19.314229  682462 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.07180885s)
	W1027 22:37:19.314277  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1027 22:37:19.314288  682462 logs.go:123] Gathering logs for kube-apiserver [c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe] ...
	I1027 22:37:19.314302  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe"
	I1027 22:37:19.351624  682462 logs.go:123] Gathering logs for kube-apiserver [a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e] ...
	I1027 22:37:19.351655  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e"
	I1027 22:37:19.386062  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:37:19.386089  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:37:21.919333  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:37:23.190873  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:56458->192.168.76.2:8443: read: connection reset by peer
	I1027 22:37:23.190963  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:37:23.191034  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:37:23.222323  682462 cri.go:89] found id: "c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe"
	I1027 22:37:23.222352  682462 cri.go:89] found id: "a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e"
	I1027 22:37:23.222359  682462 cri.go:89] found id: ""
	I1027 22:37:23.222381  682462 logs.go:282] 2 containers: [c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e]
	I1027 22:37:23.222448  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:23.227722  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:23.232297  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:37:23.232366  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:37:23.266429  682462 cri.go:89] found id: ""
	I1027 22:37:23.266458  682462 logs.go:282] 0 containers: []
	W1027 22:37:23.266470  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:37:23.266477  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:37:23.266525  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:37:23.294112  682462 cri.go:89] found id: ""
	I1027 22:37:23.294135  682462 logs.go:282] 0 containers: []
	W1027 22:37:23.294151  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:37:23.294166  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:37:23.294225  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:37:23.321545  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:37:23.321564  682462 cri.go:89] found id: ""
	I1027 22:37:23.321573  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:37:23.321621  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:23.325573  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:37:23.325626  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:37:23.355366  682462 cri.go:89] found id: ""
	I1027 22:37:23.355394  682462 logs.go:282] 0 containers: []
	W1027 22:37:23.355405  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:37:23.355413  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:37:23.355480  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:37:23.381892  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:37:23.381912  682462 cri.go:89] found id: "b58e30f26ed40862207d4e3d2d2635ea64d076fc521ac2a4c7c0385290b3cbbd"
	I1027 22:37:23.381915  682462 cri.go:89] found id: ""
	I1027 22:37:23.381922  682462 logs.go:282] 2 containers: [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e b58e30f26ed40862207d4e3d2d2635ea64d076fc521ac2a4c7c0385290b3cbbd]
	I1027 22:37:23.382009  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:23.386018  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:37:23.389613  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:37:23.389668  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:37:23.418116  682462 cri.go:89] found id: ""
	I1027 22:37:23.418154  682462 logs.go:282] 0 containers: []
	W1027 22:37:23.418167  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:37:23.418176  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:37:23.418242  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:37:23.448607  682462 cri.go:89] found id: ""
	I1027 22:37:23.448632  682462 logs.go:282] 0 containers: []
	W1027 22:37:23.448639  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:37:23.448683  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:37:23.448698  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:37:23.532242  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:37:23.532279  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:37:23.551046  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:37:23.551078  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:37:23.609803  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:37:23.609826  682462 logs.go:123] Gathering logs for kube-apiserver [c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe] ...
	I1027 22:37:23.609847  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe"
	I1027 22:37:23.644868  682462 logs.go:123] Gathering logs for kube-apiserver [a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e] ...
	I1027 22:37:23.644903  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e"
	W1027 22:37:23.673188  682462 logs.go:130] failed kube-apiserver [a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e": Process exited with status 1
	stdout:
	
	stderr:
	E1027 22:37:23.670289    3444 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e\": container with ID starting with a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e not found: ID does not exist" containerID="a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e"
	time="2025-10-27T22:37:23Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e\": container with ID starting with a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e not found: ID does not exist"
	 output: 
	** stderr ** 
	E1027 22:37:23.670289    3444 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e\": container with ID starting with a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e not found: ID does not exist" containerID="a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e"
	time="2025-10-27T22:37:23Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e\": container with ID starting with a1ae5e5aad709a1c5cb74af0607901c21e2d4adbce3131445d19750ef47f5f8e not found: ID does not exist"
	
	** /stderr **
	I1027 22:37:23.673223  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:37:23.673242  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:37:23.728211  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:37:23.728254  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:37:23.758321  682462 logs.go:123] Gathering logs for kube-controller-manager [b58e30f26ed40862207d4e3d2d2635ea64d076fc521ac2a4c7c0385290b3cbbd] ...
	I1027 22:37:23.758350  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b58e30f26ed40862207d4e3d2d2635ea64d076fc521ac2a4c7c0385290b3cbbd"
	I1027 22:37:23.789536  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:37:23.789569  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:37:23.842466  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:37:23.842505  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
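Note: the cri.go listing above shells out to `crictl ps -a --quiet --name=<component>` and treats each output line as a container ID; empty output is the "No container was found" case. The same parse in a few lines of Go (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs crictl prints, one per line, for a
    // given --name filter; an empty slice means no match.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        ids, err := listContainers("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }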
	I1027 22:37:23.714645  711813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.566865452s)
	I1027 22:37:23.714678  711813 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1027 22:37:23.714708  711813 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1027 22:37:23.714762  711813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1027 22:37:24.260378  711813 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1027 22:37:24.260434  711813 cache_images.go:125] Successfully loaded all cached images
	I1027 22:37:24.260442  711813 cache_images.go:94] duration metric: took 11.13417393s to LoadCachedImages
	I1027 22:37:24.260459  711813 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 22:37:24.260571  711813 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-188814 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
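Note: kubeadm.go:947 renders the kubelet systemd drop-in shown above. The empty `ExecStart=` line is the standard systemd idiom for clearing the distro unit's command before substituting minikube's own. A text/template sketch of that rendering (the template text is an assumption for illustration, not minikube's exact asset):

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        // Values taken from the log above.
        t.Execute(os.Stdout, map[string]string{
            "Version": "v1.34.1",
            "Node":    "no-preload-188814",
            "IP":      "192.168.94.2",
        })
    }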
	I1027 22:37:24.260664  711813 ssh_runner.go:195] Run: crio config
	I1027 22:37:24.307608  711813 cni.go:84] Creating CNI manager for ""
	I1027 22:37:24.307630  711813 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:37:24.307650  711813 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:37:24.307683  711813 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188814 NodeName:no-preload-188814 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:37:24.307801  711813 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188814"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
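Before kubeadm consumes the file, the rendered config can be exercised without mutating the host. A sketch, assuming the v1.34.1 binaries are on PATH and the file sits at /var/tmp/minikube/kubeadm.yaml as the log shows:

    # Walk through init using this config without changing the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # Print kubeadm's defaults for comparison against the overrides above
    kubeadm config print init-defaults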
	I1027 22:37:24.307868  711813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:37:24.316608  711813 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1027 22:37:24.316658  711813 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1027 22:37:24.324836  711813 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1027 22:37:24.324856  711813 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1027 22:37:24.324917  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1027 22:37:24.324954  711813 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1027 22:37:24.328932  711813 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1027 22:37:24.328973  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1027 22:37:25.659197  711813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:37:25.673716  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1027 22:37:25.677646  711813 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1027 22:37:25.677675  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1027 22:37:26.048229  711813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1027 22:37:26.052710  711813 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1027 22:37:26.052736  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
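The checksum-pinned downloads above follow the upstream install procedure: each .sha256 file carries only the digest. Verifying one binary by hand, as a sketch:

    curl -fsSLO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm"
    curl -fsSLO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256"
    # sha256sum --check expects "<digest>  <filename>"
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check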
	I1027 22:37:26.209749  711813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:37:26.217790  711813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:37:26.230008  711813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:37:26.243800  711813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1027 22:37:26.255993  711813 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:37:26.259406  711813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
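The /etc/hosts rewrite above is the usual idempotent pattern: filter out any stale entry for the name, append the current mapping, and copy the temp file back over /etc/hosts. A quick check that the pin took effect:

    # Expect: 192.168.94.2	control-plane.minikube.internal
    grep control-plane.minikube.internal /etc/hosts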
	I1027 22:37:26.269796  711813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:37:26.350630  711813 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:37:26.372105  711813 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814 for IP: 192.168.94.2
	I1027 22:37:26.372125  711813 certs.go:195] generating shared ca certs ...
	I1027 22:37:26.372147  711813 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:37:26.372306  711813 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:37:26.372368  711813 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:37:26.372380  711813 certs.go:257] generating profile certs ...
	I1027 22:37:26.372455  711813 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.key
	I1027 22:37:26.372471  711813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt with IP's: []
	I1027 22:37:26.542232  711813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt ...
	I1027 22:37:26.542266  711813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: {Name:mkb8fb04cd346be4d635eef113035efb0553137e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:37:26.542454  711813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.key ...
	I1027 22:37:26.542470  711813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.key: {Name:mkc0d799f3075bc37fc1970fa6b24661af996e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:37:26.542569  711813 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key.c506b838
	I1027 22:37:26.542583  711813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.crt.c506b838 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1027 22:37:27.119457  711813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.crt.c506b838 ...
	I1027 22:37:27.119486  711813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.crt.c506b838: {Name:mk240b4513057c4d5579fdf49c31cd1810a4f7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:37:27.119691  711813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key.c506b838 ...
	I1027 22:37:27.119710  711813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key.c506b838: {Name:mk29ad1d6db82f7873eeaa95d5c3f58a5aec920f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:37:27.119824  711813 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.crt.c506b838 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.crt
	I1027 22:37:27.119933  711813 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key.c506b838 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key
	I1027 22:37:27.120029  711813 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.key
	I1027 22:37:27.120071  711813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.crt with IP's: []
	I1027 22:37:27.378839  711813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.crt ...
	I1027 22:37:27.378864  711813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.crt: {Name:mkc5a01897469d89cd99bc009a947c6969471382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:37:27.379066  711813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.key ...
	I1027 22:37:27.379085  711813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.key: {Name:mk7d8c649ea397db189492bf59f088f0f04f14db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
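The freshly generated profile certs can be inspected with openssl to confirm subjects, validity windows, and SANs. A sketch, assuming OpenSSL 1.1.1+ (for -ext) and the profile path from the log:

    d=/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814
    # Subject, validity, and SANs of the apiserver cert generated above
    openssl x509 -in "$d/apiserver.crt" -noout -subject -dates -ext subjectAltName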
	I1027 22:37:27.379312  711813 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:37:27.379359  711813 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:37:27.379375  711813 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:37:27.379412  711813 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:37:27.379443  711813 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:37:27.379496  711813 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:37:27.379550  711813 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:37:27.380199  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:37:27.398325  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:37:27.417071  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:37:27.434451  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:37:27.451763  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:37:27.468486  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:37:27.484906  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:37:27.501626  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:37:27.518702  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:37:27.537782  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:37:27.555010  711813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:37:27.571497  711813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:37:27.583558  711813 ssh_runner.go:195] Run: openssl version
	I1027 22:37:27.589667  711813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:37:27.597570  711813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:37:27.600993  711813 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:37:27.601049  711813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:37:27.635147  711813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:37:27.644209  711813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:37:27.652439  711813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:37:27.656101  711813 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:37:27.656141  711813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:37:27.690420  711813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:37:27.699266  711813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:37:27.707647  711813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:37:27.711662  711813 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:37:27.711705  711813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:37:27.747143  711813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
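The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the system trust store locates CA certificates. Deriving one by hand, as a sketch:

    # Compute the subject hash OpenSSL uses to look up a CA cert
    hash="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    # Link the cert under its hash name so TLS clients can resolve it
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"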
	I1027 22:37:27.755990  711813 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:37:27.759522  711813 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:37:27.759591  711813 kubeadm.go:401] StartCluster: {Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:37:27.759672  711813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:37:27.759730  711813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:37:27.786615  711813 cri.go:89] found id: ""
	I1027 22:37:27.786699  711813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:37:27.794658  711813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:37:27.802339  711813 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:37:27.802392  711813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:37:27.809783  711813 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:37:27.809804  711813 kubeadm.go:158] found existing configuration files:
	
	I1027 22:37:27.809846  711813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:37:27.817335  711813 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:37:27.817375  711813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:37:27.824360  711813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:37:27.831735  711813 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:37:27.831782  711813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:37:27.838885  711813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:37:27.846111  711813 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:37:27.846155  711813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:37:27.853235  711813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:37:27.860466  711813 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:37:27.860504  711813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
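The same grep-then-remove guard runs for each of the four kubeconfig files above: if the expected control-plane endpoint is absent (or the file is missing entirely), the file is deleted so kubeadm regenerates it. A compact equivalent of what the log does, as a sketch:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done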
	I1027 22:37:27.867366  711813 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:37:27.902405  711813 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:37:27.902509  711813 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:37:27.952306  711813 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:37:27.952397  711813 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:37:27.952445  711813 kubeadm.go:319] OS: Linux
	I1027 22:37:27.952513  711813 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:37:27.952588  711813 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:37:27.952696  711813 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:37:27.952783  711813 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:37:27.952889  711813 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:37:27.953014  711813 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:37:27.953100  711813 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:37:27.953174  711813 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:37:28.017006  711813 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:37:28.017189  711813 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:37:28.017327  711813 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:37:28.031427  711813 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 27 22:37:07 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:07.243771806Z" level=info msg="Starting container: 807bb8e1e9a3aeb402be3d4e019ccf33904eb64695da401c07ae557c7095ed7e" id=7c30cc8d-e8aa-466d-b890-bdae3e3c7ff1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:37:07 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:07.245404505Z" level=info msg="Started container" PID=2173 containerID=807bb8e1e9a3aeb402be3d4e019ccf33904eb64695da401c07ae557c7095ed7e description=kube-system/coredns-5dd5756b68-jwp99/coredns id=7c30cc8d-e8aa-466d-b890-bdae3e3c7ff1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bea7863e08be614250808d3a3b7797a530a4106e6bbc2d2807019ad4cf3d8f1
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.191932499Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2f9fd8b5-4b2a-442b-947b-34790003b716 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.19210804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.215310117Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fafc9ef63f68e9c8b197d0549635e968c322c670e5546fcdab70b82c6042b2b4 UID:903d9a95-da5b-48dd-9672-2c3ef418e1a8 NetNS:/var/run/netns/1edd6c48-5fc6-4933-b909-12e3d12aa911 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000694610}] Aliases:map[]}"
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.215341153Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.227267144Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fafc9ef63f68e9c8b197d0549635e968c322c670e5546fcdab70b82c6042b2b4 UID:903d9a95-da5b-48dd-9672-2c3ef418e1a8 NetNS:/var/run/netns/1edd6c48-5fc6-4933-b909-12e3d12aa911 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000694610}] Aliases:map[]}"
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.22744796Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.228428036Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.229778441Z" level=info msg="Ran pod sandbox fafc9ef63f68e9c8b197d0549635e968c322c670e5546fcdab70b82c6042b2b4 with infra container: default/busybox/POD" id=2f9fd8b5-4b2a-442b-947b-34790003b716 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.231085894Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c93bbed0-0bbf-4eb9-9184-1d08e3ce39e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.231218628Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c93bbed0-0bbf-4eb9-9184-1d08e3ce39e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.231253694Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c93bbed0-0bbf-4eb9-9184-1d08e3ce39e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.231798153Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e28a7ad3-be89-427a-8628-304fb4751ad2 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:37:19 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:19.233255663Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.349326087Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=e28a7ad3-be89-427a-8628-304fb4751ad2 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.350263971Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0072740-bedf-4280-8ed2-87c296db348a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.351690684Z" level=info msg="Creating container: default/busybox/busybox" id=6be85ddd-2c26-4f7a-87d6-13b9249464b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.351821581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.355424484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.355840796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.381122019Z" level=info msg="Created container 12307696a31247e28cba9a195758fdba2c430db6055eb76751a52e1790af00a7: default/busybox/busybox" id=6be85ddd-2c26-4f7a-87d6-13b9249464b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.381731106Z" level=info msg="Starting container: 12307696a31247e28cba9a195758fdba2c430db6055eb76751a52e1790af00a7" id=1da1c7f3-7344-4275-9650-30306cb5a2a9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:37:21 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:21.383393924Z" level=info msg="Started container" PID=2256 containerID=12307696a31247e28cba9a195758fdba2c430db6055eb76751a52e1790af00a7 description=default/busybox/busybox id=1da1c7f3-7344-4275-9650-30306cb5a2a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fafc9ef63f68e9c8b197d0549635e968c322c670e5546fcdab70b82c6042b2b4
	Oct 27 22:37:27 old-k8s-version-908589 crio[777]: time="2025-10-27T22:37:27.977234132Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	12307696a3124       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   fafc9ef63f68e       busybox                                          default
	807bb8e1e9a3a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      22 seconds ago      Running             coredns                   0                   9bea7863e08be       coredns-5dd5756b68-jwp99                         kube-system
	1e797000c7324       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 seconds ago      Running             storage-provisioner       0                   340a0b3dc14e1       storage-provisioner                              kube-system
	37744dad3fb12       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    33 seconds ago      Running             kindnet-cni               0                   aa2574f43c24a       kindnet-v6dh4                                    kube-system
	90d28ea2cd765       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      35 seconds ago      Running             kube-proxy                0                   098de749bbce9       kube-proxy-srms5                                 kube-system
	44612a305fb27       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      53 seconds ago      Running             etcd                      0                   3e91c054470aa       etcd-old-k8s-version-908589                      kube-system
	009e25fc7d88a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      53 seconds ago      Running             kube-apiserver            0                   7b6588947ca23       kube-apiserver-old-k8s-version-908589            kube-system
	25ed2ab21e652       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      53 seconds ago      Running             kube-controller-manager   0                   94866842c3301       kube-controller-manager-old-k8s-version-908589   kube-system
	5e28e600fb17d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      53 seconds ago      Running             kube-scheduler            0                   a79cc9574a001       kube-scheduler-old-k8s-version-908589            kube-system
	
	
	==> coredns [807bb8e1e9a3aeb402be3d4e019ccf33904eb64695da401c07ae557c7095ed7e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52145 - 10858 "HINFO IN 4203796625853671634.1434957838604726169. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026284805s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-908589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-908589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=old-k8s-version-908589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_36_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-908589
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:37:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:37:11 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:37:11 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:37:11 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:37:11 +0000   Mon, 27 Oct 2025 22:37:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-908589
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                150d46e9-6742-4ab0-adb7-789e26ecfc2c
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-jwp99                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     36s
	  kube-system                 etcd-old-k8s-version-908589                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         51s
	  kube-system                 kindnet-v6dh4                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-old-k8s-version-908589             250m (3%)     0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-controller-manager-old-k8s-version-908589    200m (2%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-srms5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-old-k8s-version-908589             100m (1%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 35s   kube-proxy       
	  Normal  Starting                 49s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s   kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s   kubelet          Node old-k8s-version-908589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s   kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s   node-controller  Node old-k8s-version-908589 event: Registered Node old-k8s-version-908589 in Controller
	  Normal  NodeReady                23s   kubelet          Node old-k8s-version-908589 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [44612a305fb279b54958d6bfc879d09e18acd2766db14d892b6d529375c90b42] <==
	{"level":"info","ts":"2025-10-27T22:36:36.150772Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T22:36:36.150785Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T22:36:36.151745Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T22:36:36.151836Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-27T22:36:36.151915Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-27T22:36:36.152064Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T22:36:36.152135Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T22:36:36.642654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-27T22:36:36.642697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-27T22:36:36.642725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-10-27T22:36:36.64274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-10-27T22:36:36.642745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-27T22:36:36.642753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-10-27T22:36:36.64276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-27T22:36:36.643667Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-908589 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T22:36:36.643677Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T22:36:36.643681Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:36:36.64371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T22:36:36.643874Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T22:36:36.643895Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T22:36:36.64423Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:36:36.644321Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:36:36.644354Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:36:36.644968Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-27T22:36:36.64511Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:37:29 up  2:19,  0 user,  load average: 3.00, 2.38, 2.67
	Linux old-k8s-version-908589 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37744dad3fb1254e6df75174bed401f16e6cf6107c50262ac7ab562f2781b15f] <==
	I1027 22:36:56.374558       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:36:56.374830       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1027 22:36:56.374983       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:36:56.375000       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:36:56.375017       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:36:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:36:56.579704       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:36:56.579748       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:36:56.579761       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:36:56.579868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:36:56.969377       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:36:56.969436       1 metrics.go:72] Registering metrics
	I1027 22:36:56.969785       1 controller.go:711] "Syncing nftables rules"
	I1027 22:37:06.587066       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:37:06.587123       1 main.go:301] handling current node
	I1027 22:37:16.578889       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:37:16.578978       1 main.go:301] handling current node
	I1027 22:37:26.580505       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:37:26.580553       1 main.go:301] handling current node
	
	
	==> kube-apiserver [009e25fc7d88ad6ab8b58eff8789de5ec2f7c0eb731dec57760e75a3f1d18d55] <==
	I1027 22:36:37.786842       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1027 22:36:37.787459       1 shared_informer.go:318] Caches are synced for configmaps
	I1027 22:36:37.787771       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1027 22:36:37.787799       1 aggregator.go:166] initial CRD sync complete...
	I1027 22:36:37.787809       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 22:36:37.787815       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:36:37.787825       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:36:37.787895       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 22:36:37.806467       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 22:36:37.807094       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:36:38.691969       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 22:36:38.695598       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 22:36:38.695620       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:36:39.103385       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:36:39.136355       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:36:39.196624       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 22:36:39.202690       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1027 22:36:39.203724       1 controller.go:624] quota admission added evaluator for: endpoints
	I1027 22:36:39.207598       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:36:39.734866       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 22:36:40.863505       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 22:36:40.874300       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 22:36:40.884582       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1027 22:36:53.243652       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1027 22:36:53.493579       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [25ed2ab21e6529d89059358627af0052ae55d4457eb9ab6feab396a0469d56ba] <==
	I1027 22:36:52.787319       1 shared_informer.go:318] Caches are synced for attach detach
	I1027 22:36:52.787405       1 shared_informer.go:318] Caches are synced for ephemeral
	I1027 22:36:52.796118       1 shared_informer.go:318] Caches are synced for resource quota
	I1027 22:36:53.115702       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 22:36:53.160065       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 22:36:53.160102       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 22:36:53.252392       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-v6dh4"
	I1027 22:36:53.253447       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-srms5"
	I1027 22:36:53.496659       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1027 22:36:53.606563       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-s46xv"
	I1027 22:36:53.616435       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jwp99"
	I1027 22:36:53.625668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.032578ms"
	I1027 22:36:53.633109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.286894ms"
	I1027 22:36:53.633616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="308.37µs"
	I1027 22:36:53.802104       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1027 22:36:53.815753       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-s46xv"
	I1027 22:36:53.822267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.243009ms"
	I1027 22:36:53.828509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.1894ms"
	I1027 22:36:53.828618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.207µs"
	I1027 22:37:06.901736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.095µs"
	I1027 22:37:06.916807       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.668µs"
	I1027 22:37:07.542067       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1027 22:37:08.015739       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.678µs"
	I1027 22:37:17.225913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.324941ms"
	I1027 22:37:17.226053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.44µs"
	
	
	==> kube-proxy [90d28ea2cd765916b9481849cca789f8a42fd940e84f492fb80c7641d5bf5e96] <==
	I1027 22:36:53.697867       1 server_others.go:69] "Using iptables proxy"
	I1027 22:36:53.712192       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1027 22:36:53.733701       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:36:53.736212       1 server_others.go:152] "Using iptables Proxier"
	I1027 22:36:53.736251       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 22:36:53.736259       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 22:36:53.736312       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 22:36:53.736665       1 server.go:846] "Version info" version="v1.28.0"
	I1027 22:36:53.736689       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:36:53.737490       1 config.go:188] "Starting service config controller"
	I1027 22:36:53.737533       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 22:36:53.737502       1 config.go:97] "Starting endpoint slice config controller"
	I1027 22:36:53.739155       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 22:36:53.739230       1 config.go:315] "Starting node config controller"
	I1027 22:36:53.739667       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 22:36:53.838526       1 shared_informer.go:318] Caches are synced for service config
	I1027 22:36:53.842125       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 22:36:53.842169       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5e28e600fb17d2750e306823028bda319319a97e4ceca7cc3749fc7ef0315315] <==
	W1027 22:36:37.744110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1027 22:36:37.744191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1027 22:36:37.744765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1027 22:36:37.744788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1027 22:36:37.745234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1027 22:36:37.745259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1027 22:36:37.745263       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1027 22:36:37.745277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1027 22:36:38.623697       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1027 22:36:38.623730       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1027 22:36:38.686071       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1027 22:36:38.686112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1027 22:36:38.745318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1027 22:36:38.745349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1027 22:36:38.815793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1027 22:36:38.815830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1027 22:36:38.864685       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1027 22:36:38.864721       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1027 22:36:38.867010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1027 22:36:38.867054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1027 22:36:38.979893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1027 22:36:38.979937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1027 22:36:39.045831       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1027 22:36:39.045957       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1027 22:36:41.241119       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 22:36:52 old-k8s-version-908589 kubelet[1393]: I1027 22:36:52.681553    1393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.258821    1393 topology_manager.go:215] "Topology Admit Handler" podUID="457d183d-2a92-418b-aecd-5b20e8d58d98" podNamespace="kube-system" podName="kindnet-v6dh4"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.260265    1393 topology_manager.go:215] "Topology Admit Handler" podUID="e85ff7a5-d5a3-4eca-b969-465d08c1e022" podNamespace="kube-system" podName="kube-proxy-srms5"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.305002    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/457d183d-2a92-418b-aecd-5b20e8d58d98-xtables-lock\") pod \"kindnet-v6dh4\" (UID: \"457d183d-2a92-418b-aecd-5b20e8d58d98\") " pod="kube-system/kindnet-v6dh4"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.305118    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/457d183d-2a92-418b-aecd-5b20e8d58d98-cni-cfg\") pod \"kindnet-v6dh4\" (UID: \"457d183d-2a92-418b-aecd-5b20e8d58d98\") " pod="kube-system/kindnet-v6dh4"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.305154    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/457d183d-2a92-418b-aecd-5b20e8d58d98-lib-modules\") pod \"kindnet-v6dh4\" (UID: \"457d183d-2a92-418b-aecd-5b20e8d58d98\") " pod="kube-system/kindnet-v6dh4"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.305187    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e85ff7a5-d5a3-4eca-b969-465d08c1e022-xtables-lock\") pod \"kube-proxy-srms5\" (UID: \"e85ff7a5-d5a3-4eca-b969-465d08c1e022\") " pod="kube-system/kube-proxy-srms5"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.305223    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpk98\" (UniqueName: \"kubernetes.io/projected/e85ff7a5-d5a3-4eca-b969-465d08c1e022-kube-api-access-cpk98\") pod \"kube-proxy-srms5\" (UID: \"e85ff7a5-d5a3-4eca-b969-465d08c1e022\") " pod="kube-system/kube-proxy-srms5"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.305252    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e85ff7a5-d5a3-4eca-b969-465d08c1e022-lib-modules\") pod \"kube-proxy-srms5\" (UID: \"e85ff7a5-d5a3-4eca-b969-465d08c1e022\") " pod="kube-system/kube-proxy-srms5"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.305293    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhd9\" (UniqueName: \"kubernetes.io/projected/457d183d-2a92-418b-aecd-5b20e8d58d98-kube-api-access-wqhd9\") pod \"kindnet-v6dh4\" (UID: \"457d183d-2a92-418b-aecd-5b20e8d58d98\") " pod="kube-system/kindnet-v6dh4"
	Oct 27 22:36:53 old-k8s-version-908589 kubelet[1393]: I1027 22:36:53.305332    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e85ff7a5-d5a3-4eca-b969-465d08c1e022-kube-proxy\") pod \"kube-proxy-srms5\" (UID: \"e85ff7a5-d5a3-4eca-b969-465d08c1e022\") " pod="kube-system/kube-proxy-srms5"
	Oct 27 22:36:56 old-k8s-version-908589 kubelet[1393]: I1027 22:36:56.990495    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-v6dh4" podStartSLOduration=1.436472442 podCreationTimestamp="2025-10-27 22:36:53 +0000 UTC" firstStartedPulling="2025-10-27 22:36:53.578505229 +0000 UTC m=+12.737982628" lastFinishedPulling="2025-10-27 22:36:56.13247628 +0000 UTC m=+15.291953667" observedRunningTime="2025-10-27 22:36:56.990321317 +0000 UTC m=+16.149798722" watchObservedRunningTime="2025-10-27 22:36:56.990443481 +0000 UTC m=+16.149920889"
	Oct 27 22:36:56 old-k8s-version-908589 kubelet[1393]: I1027 22:36:56.990688    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-srms5" podStartSLOduration=3.990655834 podCreationTimestamp="2025-10-27 22:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:36:53.984094643 +0000 UTC m=+13.143572047" watchObservedRunningTime="2025-10-27 22:36:56.990655834 +0000 UTC m=+16.150133238"
	Oct 27 22:37:06 old-k8s-version-908589 kubelet[1393]: I1027 22:37:06.879907    1393 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 27 22:37:06 old-k8s-version-908589 kubelet[1393]: I1027 22:37:06.900072    1393 topology_manager.go:215] "Topology Admit Handler" podUID="02cfcc15-9ca3-459d-9151-b34ba21474a3" podNamespace="kube-system" podName="storage-provisioner"
	Oct 27 22:37:06 old-k8s-version-908589 kubelet[1393]: I1027 22:37:06.901667    1393 topology_manager.go:215] "Topology Admit Handler" podUID="bb1a9fac-9dcc-4267-8887-7d24c3f052c9" podNamespace="kube-system" podName="coredns-5dd5756b68-jwp99"
	Oct 27 22:37:07 old-k8s-version-908589 kubelet[1393]: I1027 22:37:07.008288    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvnx\" (UniqueName: \"kubernetes.io/projected/bb1a9fac-9dcc-4267-8887-7d24c3f052c9-kube-api-access-9qvnx\") pod \"coredns-5dd5756b68-jwp99\" (UID: \"bb1a9fac-9dcc-4267-8887-7d24c3f052c9\") " pod="kube-system/coredns-5dd5756b68-jwp99"
	Oct 27 22:37:07 old-k8s-version-908589 kubelet[1393]: I1027 22:37:07.008343    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb1a9fac-9dcc-4267-8887-7d24c3f052c9-config-volume\") pod \"coredns-5dd5756b68-jwp99\" (UID: \"bb1a9fac-9dcc-4267-8887-7d24c3f052c9\") " pod="kube-system/coredns-5dd5756b68-jwp99"
	Oct 27 22:37:07 old-k8s-version-908589 kubelet[1393]: I1027 22:37:07.008374    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kr6c\" (UniqueName: \"kubernetes.io/projected/02cfcc15-9ca3-459d-9151-b34ba21474a3-kube-api-access-5kr6c\") pod \"storage-provisioner\" (UID: \"02cfcc15-9ca3-459d-9151-b34ba21474a3\") " pod="kube-system/storage-provisioner"
	Oct 27 22:37:07 old-k8s-version-908589 kubelet[1393]: I1027 22:37:07.008497    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/02cfcc15-9ca3-459d-9151-b34ba21474a3-tmp\") pod \"storage-provisioner\" (UID: \"02cfcc15-9ca3-459d-9151-b34ba21474a3\") " pod="kube-system/storage-provisioner"
	Oct 27 22:37:08 old-k8s-version-908589 kubelet[1393]: I1027 22:37:08.015342    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jwp99" podStartSLOduration=15.015291645 podCreationTimestamp="2025-10-27 22:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:37:08.015035506 +0000 UTC m=+27.174512911" watchObservedRunningTime="2025-10-27 22:37:08.015291645 +0000 UTC m=+27.174769050"
	Oct 27 22:37:17 old-k8s-version-908589 kubelet[1393]: I1027 22:37:17.215570    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=24.215512103000002 podCreationTimestamp="2025-10-27 22:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:37:08.026871017 +0000 UTC m=+27.186348422" watchObservedRunningTime="2025-10-27 22:37:17.215512103 +0000 UTC m=+36.374989505"
	Oct 27 22:37:18 old-k8s-version-908589 kubelet[1393]: I1027 22:37:18.889713    1393 topology_manager.go:215] "Topology Admit Handler" podUID="903d9a95-da5b-48dd-9672-2c3ef418e1a8" podNamespace="default" podName="busybox"
	Oct 27 22:37:19 old-k8s-version-908589 kubelet[1393]: I1027 22:37:19.074264    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7ftn\" (UniqueName: \"kubernetes.io/projected/903d9a95-da5b-48dd-9672-2c3ef418e1a8-kube-api-access-b7ftn\") pod \"busybox\" (UID: \"903d9a95-da5b-48dd-9672-2c3ef418e1a8\") " pod="default/busybox"
	Oct 27 22:37:22 old-k8s-version-908589 kubelet[1393]: I1027 22:37:22.054669    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9363614139999998 podCreationTimestamp="2025-10-27 22:37:18 +0000 UTC" firstStartedPulling="2025-10-27 22:37:19.231424511 +0000 UTC m=+38.390901898" lastFinishedPulling="2025-10-27 22:37:21.349678095 +0000 UTC m=+40.509155491" observedRunningTime="2025-10-27 22:37:22.054540816 +0000 UTC m=+41.214018220" watchObservedRunningTime="2025-10-27 22:37:22.054615007 +0000 UTC m=+41.214092411"
	
	
	==> storage-provisioner [1e797000c732493398ecb608e387c6db65c033d68759f65da9dc505ff9c62bb8] <==
	I1027 22:37:07.253193       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 22:37:07.261216       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 22:37:07.261269       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 22:37:07.267839       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 22:37:07.267972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0abcea3-4af1-407b-918c-156849108be7", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-908589_f44d9fda-f1cc-4982-aa8f-29143ea47021 became leader
	I1027 22:37:07.268049       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908589_f44d9fda-f1cc-4982-aa8f-29143ea47021!
	I1027 22:37:07.369195       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908589_f44d9fda-f1cc-4982-aa8f-29143ea47021!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908589 -n old-k8s-version-908589
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-908589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.26s)
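The FAIL above can be reproduced in isolation. A hedged sketch for re-running just this subtest from a minikube source checkout, assuming out/minikube-linux-amd64 has already been built; the -run pattern is the subtest name from the FAIL line, while the timeout and the --minikube-start-args value (mirroring the docker/crio arguments visible in the Audit log) are assumptions about the local setup:

	# Re-run only the failing subtest; flags after -args go to the test binary.
	go test ./test/integration -v -timeout 30m \
	  -run 'TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive' \
	  -args --minikube-start-args='--driver=docker --container-runtime=crio'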

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (250.42385ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:38:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-188814 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-188814 describe deploy/metrics-server -n kube-system: exit status 1 (59.737274ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-188814 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
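The MK_ADDON_ENABLE_PAUSED failure in the stderr block above comes from minikube's paused-state check, which shells out to "sudo runc list -f json" and fails because /run/runc does not exist on this crio node. A minimal sketch for checking container state through CRI-O directly instead; the profile name is taken from the log above, and crictl with its --state filter is standard CRI tooling:

	# List running containers via CRI-O's CRI socket rather than runc's state directory.
	minikube -p no-preload-188814 ssh -- sudo crictl ps --state Running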
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-188814
helpers_test.go:243: (dbg) docker inspect no-preload-188814:

-- stdout --
	[
	    {
	        "Id": "5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032",
	        "Created": "2025-10-27T22:37:08.821298922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 712352,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:37:08.864160098Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/hostname",
	        "HostsPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/hosts",
	        "LogPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032-json.log",
	        "Name": "/no-preload-188814",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-188814:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-188814",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032",
	                "LowerDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-188814",
	                "Source": "/var/lib/docker/volumes/no-preload-188814/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-188814",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-188814",
	                "name.minikube.sigs.k8s.io": "no-preload-188814",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9a074a67bf48613fd91b1dcd367da7b4ebf408b7166fe3c1ffc375903dab1df",
	            "SandboxKey": "/var/run/docker/netns/a9a074a67bf4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-188814": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:df:79:fd:e2:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae03ff1f23a640f11de7d6590557c58c27007a2db36f9f0148ee4c491af73383",
	                    "EndpointID": "a00a1b459e8709d83bc7079c6b14d42886d2f7ab79ef909537052d570ad88b0d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-188814",
	                        "5aadc4ee2b12"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
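When only a few fields matter, the docker inspect dump above can be narrowed with a Go-template format string; the container name is taken from the output, and the template fields match the JSON keys shown:

	# Print just the container state and the host-port map from the inspect data above.
	docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' no-preload-188814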
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188814 -n no-preload-188814
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-188814 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-188814 logs -n 25: (1.046388665s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-293335 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo containerd config dump                                                                                                                                                                                                  │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ ssh     │ -p cilium-293335 sudo crio config                                                                                                                                                                                                             │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ delete  │ -p cilium-293335                                                                                                                                                                                                                              │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ stop    │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ stop    │ -p old-k8s-version-908589 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-908589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ start   │ -p cert-expiration-219241 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-219241 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:38:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:38:08.727974  722077 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:38:08.728284  722077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:08.728288  722077 out.go:374] Setting ErrFile to fd 2...
	I1027 22:38:08.728291  722077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:08.728506  722077 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:38:08.728998  722077 out.go:368] Setting JSON to false
	I1027 22:38:08.730497  722077 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8428,"bootTime":1761596261,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:38:08.730578  722077 start.go:143] virtualization: kvm guest
	I1027 22:38:08.735312  722077 out.go:179] * [cert-expiration-219241] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:38:08.736638  722077 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:38:08.736647  722077 notify.go:221] Checking for updates...
	I1027 22:38:08.738625  722077 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:38:08.740035  722077 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:08.741314  722077 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:38:08.742341  722077 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:38:08.743305  722077 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:38:08.748005  722077 config.go:182] Loaded profile config "cert-expiration-219241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:08.748732  722077 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:38:08.772096  722077 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:38:08.772168  722077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:38:08.834448  722077 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 22:38:08.822511207 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:38:08.834546  722077 docker.go:318] overlay module found
	I1027 22:38:08.836764  722077 out.go:179] * Using the docker driver based on existing profile
	
	
	==> CRI-O <==
	Oct 27 22:37:56 no-preload-188814 crio[776]: time="2025-10-27T22:37:56.930267502Z" level=info msg="Started container" PID=2944 containerID=802e321e9c3693a26dbed55609488d0f7f5caa0b5c21258bd4df39e604c616b3 description=kube-system/storage-provisioner/storage-provisioner id=b263ac87-1af1-4bce-93dc-3a06fa12b348 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e31955c890f7a5194d085591a7449a950850f444d3c93362f22f2fad90a7b6d
	Oct 27 22:37:56 no-preload-188814 crio[776]: time="2025-10-27T22:37:56.930873292Z" level=info msg="Started container" PID=2945 containerID=5c758d37216070177d2db16f395cd19d17d7b30885205dc247fb32e88f973a86 description=kube-system/coredns-66bc5c9577-m8lfc/coredns id=0c7af39d-c45f-487f-a57e-646bcd0e23e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c604657cc5fbe99b1096e7029f944dde99e7abacccc4b4aa1f890a730819e23
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.330563743Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ca62bd87-d655-40c8-b6b4-1a5322473db0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.330681107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.335535896Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ae478a61ef9cb443bfb87cb6c54d2fdf6e0dcdcb6b5560ae0c05a55674401f2d UID:9683c10c-e747-4fa9-9007-4f2974e50e4e NetNS:/var/run/netns/f34344d1-213a-44ac-8842-67017e4b053d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008afa8}] Aliases:map[]}"
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.335567746Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.344752519Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ae478a61ef9cb443bfb87cb6c54d2fdf6e0dcdcb6b5560ae0c05a55674401f2d UID:9683c10c-e747-4fa9-9007-4f2974e50e4e NetNS:/var/run/netns/f34344d1-213a-44ac-8842-67017e4b053d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008afa8}] Aliases:map[]}"
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.344883086Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.345726619Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.347000317Z" level=info msg="Ran pod sandbox ae478a61ef9cb443bfb87cb6c54d2fdf6e0dcdcb6b5560ae0c05a55674401f2d with infra container: default/busybox/POD" id=ca62bd87-d655-40c8-b6b4-1a5322473db0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.348116375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a604a872-96ae-46c0-bf9f-d1b73916fd3c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.348247381Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a604a872-96ae-46c0-bf9f-d1b73916fd3c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.348282443Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a604a872-96ae-46c0-bf9f-d1b73916fd3c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.348781748Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=428efbf6-c82a-48cf-a1ef-40911b257195 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:37:59 no-preload-188814 crio[776]: time="2025-10-27T22:37:59.352439791Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.469698127Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=428efbf6-c82a-48cf-a1ef-40911b257195 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.470339227Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5d9addec-ea49-4e5d-abbc-3103baedfe10 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.471705926Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=739a6408-1a59-479d-a268-aec2efe61c6f name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.474828031Z" level=info msg="Creating container: default/busybox/busybox" id=4735c25a-4c5f-49fe-8e51-4855b2a89f96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.474995965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.478756537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.479372123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.514141333Z" level=info msg="Created container cd3dff86ff587a34d3cc2dfa426b243baebd5905956bbcd7fe5d276618a51db3: default/busybox/busybox" id=4735c25a-4c5f-49fe-8e51-4855b2a89f96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.51483554Z" level=info msg="Starting container: cd3dff86ff587a34d3cc2dfa426b243baebd5905956bbcd7fe5d276618a51db3" id=e711431a-73d4-4534-bf29-7f722a5ed76a name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:01 no-preload-188814 crio[776]: time="2025-10-27T22:38:01.516782174Z" level=info msg="Started container" PID=3017 containerID=cd3dff86ff587a34d3cc2dfa426b243baebd5905956bbcd7fe5d276618a51db3 description=default/busybox/busybox id=e711431a-73d4-4534-bf29-7f722a5ed76a name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae478a61ef9cb443bfb87cb6c54d2fdf6e0dcdcb6b5560ae0c05a55674401f2d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cd3dff86ff587       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   ae478a61ef9cb       busybox                                     default
	5c758d3721607       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   3c604657cc5fb       coredns-66bc5c9577-m8lfc                    kube-system
	802e321e9c369       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   1e31955c890f7       storage-provisioner                         kube-system
	09478f27ab7c7       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   4e588cac0cb6b       kindnet-thlc6                               kube-system
	23a3d494fad83       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      26 seconds ago      Running             kube-proxy                0                   158e2cfcdce88       kube-proxy-4nwvc                            kube-system
	c4c9950ff8c05       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   22aacdb7dc47e       kube-controller-manager-no-preload-188814   kube-system
	330a72c36f3ec       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   431aacf805ab4       kube-scheduler-no-preload-188814            kube-system
	8276c149f3e9e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   1d42e9ee0f0c2       etcd-no-preload-188814                      kube-system
	febfd8e84c8d4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   5a0802b74c26a       kube-apiserver-no-preload-188814            kube-system
	
	
	==> coredns [5c758d37216070177d2db16f395cd19d17d7b30885205dc247fb32e88f973a86] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56230 - 13149 "HINFO IN 6930761051627848925.3162650180613598843. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033834365s
	
	
	==> describe nodes <==
	Name:               no-preload-188814
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-188814
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=no-preload-188814
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_37_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:37:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-188814
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:38:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:38:07 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:38:07 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:38:07 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:38:07 +0000   Mon, 27 Oct 2025 22:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-188814
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9b25c6cb-fee1-43be-8dc1-88bc737c041a
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-m8lfc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-188814                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-thlc6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-188814             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-188814    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-4nwvc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-188814             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node no-preload-188814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node no-preload-188814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node no-preload-188814 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node no-preload-188814 event: Registered Node no-preload-188814 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-188814 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [8276c149f3e9e4c5f98f430007aa1cca4928470140b2f17072afb0a009ac392c] <==
	{"level":"warn","ts":"2025-10-27T22:37:34.006348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.014786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.021848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.027649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.034877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.041328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.048390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.054515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.061774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.080109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.086769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.092807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.098727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.105204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.111391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.118759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.124559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.135982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.141781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.148370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.165128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.168142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.173898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.179635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:37:34.225224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45730","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:38:09 up  2:20,  0 user,  load average: 2.07, 2.22, 2.60
	Linux no-preload-188814 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [09478f27ab7c7f9f1d13818c69af605779bb305256d38d41afb4470dafabd2ed] <==
	I1027 22:37:46.208541       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:37:46.208781       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 22:37:46.208893       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:37:46.208909       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:37:46.208936       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:37:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:37:46.414599       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:37:46.414645       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:37:46.414658       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:37:46.414827       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:37:46.815140       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:37:46.815166       1 metrics.go:72] Registering metrics
	I1027 22:37:46.815209       1 controller.go:711] "Syncing nftables rules"
	I1027 22:37:56.418074       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:37:56.418127       1 main.go:301] handling current node
	I1027 22:38:06.418473       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:38:06.418537       1 main.go:301] handling current node
	
	
	==> kube-apiserver [febfd8e84c8d4d68a7009867a749c4ff1f4a4a9d83e717290f712657c4efa310] <==
	E1027 22:37:34.754485       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1027 22:37:34.803746       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:37:34.804208       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 22:37:34.804341       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:37:34.808575       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:37:34.808854       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:37:34.899147       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:37:35.607612       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 22:37:35.612307       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 22:37:35.612322       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:37:36.076147       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:37:36.112111       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:37:36.208470       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 22:37:36.217195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1027 22:37:36.218287       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:37:36.222971       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:37:36.624075       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:37:37.285231       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:37:37.292810       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 22:37:37.299073       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:37:42.278325       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:37:42.281994       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:37:42.377334       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 22:37:42.575346       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1027 22:38:08.116779       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:39408: use of closed network connection
	
	
	==> kube-controller-manager [c4c9950ff8c0550b26de1b843575cd51a487e38d92eca3c26ac61591110b8803] <==
	I1027 22:37:41.622915       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:37:41.623054       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:37:41.623067       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 22:37:41.624273       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:37:41.624291       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 22:37:41.624346       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:37:41.624357       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:37:41.624360       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:37:41.624375       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 22:37:41.624391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:37:41.624358       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 22:37:41.624349       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:37:41.624899       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:37:41.624936       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:37:41.627264       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 22:37:41.627317       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 22:37:41.628497       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 22:37:41.628516       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 22:37:41.628566       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:37:41.629748       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:37:41.630838       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:37:41.636083       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 22:37:41.642305       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:37:41.647612       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:37:56.574584       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [23a3d494fad8338a831c8246db48659a3ba167413f9a74e1e45f70f846d2f7f4] <==
	I1027 22:37:43.381658       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:37:43.451710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:37:43.552456       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:37:43.552489       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 22:37:43.552575       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:37:43.570426       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:37:43.570474       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:37:43.575302       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:37:43.575651       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:37:43.575688       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:37:43.576927       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:37:43.576968       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:37:43.577027       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:37:43.577045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:37:43.577095       1 config.go:309] "Starting node config controller"
	I1027 22:37:43.577149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:37:43.577159       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:37:43.577098       1 config.go:200] "Starting service config controller"
	I1027 22:37:43.577211       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:37:43.677079       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:37:43.677211       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:37:43.677307       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [330a72c36f3ecc4375771006dfa51160a53c73c4cb8ab83b79e75d1b132bc8e5] <==
	E1027 22:37:34.649423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:37:34.649534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:37:34.650042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:37:34.650133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:37:34.650246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:37:34.650349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:37:34.650437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:37:34.650552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:37:34.650671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:37:34.650815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:37:34.650971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:37:34.651093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 22:37:34.651106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:37:34.651535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:37:34.651595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:37:34.651743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:37:34.652083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:37:35.494077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:37:35.503349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:37:35.655168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:37:35.666375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:37:35.699539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:37:35.703636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:37:35.710776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1027 22:37:38.546883       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: I1027 22:37:42.427453    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9-lib-modules\") pod \"kindnet-thlc6\" (UID: \"9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9\") " pod="kube-system/kindnet-thlc6"
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: I1027 22:37:42.427478    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9-cni-cfg\") pod \"kindnet-thlc6\" (UID: \"9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9\") " pod="kube-system/kindnet-thlc6"
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: I1027 22:37:42.427494    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9-xtables-lock\") pod \"kindnet-thlc6\" (UID: \"9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9\") " pod="kube-system/kindnet-thlc6"
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: I1027 22:37:42.427510    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a82e59ec-7ef7-46aa-a9d3-64a1f8af2222-xtables-lock\") pod \"kube-proxy-4nwvc\" (UID: \"a82e59ec-7ef7-46aa-a9d3-64a1f8af2222\") " pod="kube-system/kube-proxy-4nwvc"
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: I1027 22:37:42.427533    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a82e59ec-7ef7-46aa-a9d3-64a1f8af2222-lib-modules\") pod \"kube-proxy-4nwvc\" (UID: \"a82e59ec-7ef7-46aa-a9d3-64a1f8af2222\") " pod="kube-system/kube-proxy-4nwvc"
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: I1027 22:37:42.427563    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfmx\" (UniqueName: \"kubernetes.io/projected/9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9-kube-api-access-6bfmx\") pod \"kindnet-thlc6\" (UID: \"9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9\") " pod="kube-system/kindnet-thlc6"
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: I1027 22:37:42.427587    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a82e59ec-7ef7-46aa-a9d3-64a1f8af2222-kube-proxy\") pod \"kube-proxy-4nwvc\" (UID: \"a82e59ec-7ef7-46aa-a9d3-64a1f8af2222\") " pod="kube-system/kube-proxy-4nwvc"
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: E1027 22:37:42.534805    2331 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: E1027 22:37:42.534838    2331 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: E1027 22:37:42.534887    2331 projected.go:196] Error preparing data for projected volume kube-api-access-6bfmx for pod kube-system/kindnet-thlc6: configmap "kube-root-ca.crt" not found
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: E1027 22:37:42.534846    2331 projected.go:196] Error preparing data for projected volume kube-api-access-vk8fj for pod kube-system/kube-proxy-4nwvc: configmap "kube-root-ca.crt" not found
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: E1027 22:37:42.534988    2331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9-kube-api-access-6bfmx podName:9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9 nodeName:}" failed. No retries permitted until 2025-10-27 22:37:43.034939721 +0000 UTC m=+6.008147458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6bfmx" (UniqueName: "kubernetes.io/projected/9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9-kube-api-access-6bfmx") pod "kindnet-thlc6" (UID: "9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9") : configmap "kube-root-ca.crt" not found
	Oct 27 22:37:42 no-preload-188814 kubelet[2331]: E1027 22:37:42.535042    2331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a82e59ec-7ef7-46aa-a9d3-64a1f8af2222-kube-api-access-vk8fj podName:a82e59ec-7ef7-46aa-a9d3-64a1f8af2222 nodeName:}" failed. No retries permitted until 2025-10-27 22:37:43.03502291 +0000 UTC m=+6.008230634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vk8fj" (UniqueName: "kubernetes.io/projected/a82e59ec-7ef7-46aa-a9d3-64a1f8af2222-kube-api-access-vk8fj") pod "kube-proxy-4nwvc" (UID: "a82e59ec-7ef7-46aa-a9d3-64a1f8af2222") : configmap "kube-root-ca.crt" not found
	Oct 27 22:37:44 no-preload-188814 kubelet[2331]: I1027 22:37:44.141970    2331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4nwvc" podStartSLOduration=2.141954567 podStartE2EDuration="2.141954567s" podCreationTimestamp="2025-10-27 22:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:37:44.14174039 +0000 UTC m=+7.114948143" watchObservedRunningTime="2025-10-27 22:37:44.141954567 +0000 UTC m=+7.115162306"
	Oct 27 22:37:46 no-preload-188814 kubelet[2331]: I1027 22:37:46.149086    2331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-thlc6" podStartSLOduration=1.468115066 podStartE2EDuration="4.149062389s" podCreationTimestamp="2025-10-27 22:37:42 +0000 UTC" firstStartedPulling="2025-10-27 22:37:43.305015858 +0000 UTC m=+6.278223593" lastFinishedPulling="2025-10-27 22:37:45.985963194 +0000 UTC m=+8.959170916" observedRunningTime="2025-10-27 22:37:46.14891638 +0000 UTC m=+9.122124134" watchObservedRunningTime="2025-10-27 22:37:46.149062389 +0000 UTC m=+9.122270131"
	Oct 27 22:37:56 no-preload-188814 kubelet[2331]: I1027 22:37:56.525351    2331 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 22:37:56 no-preload-188814 kubelet[2331]: I1027 22:37:56.638038    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/486551a5-b1eb-4fb1-8f1e-ba4a945a2791-config-volume\") pod \"coredns-66bc5c9577-m8lfc\" (UID: \"486551a5-b1eb-4fb1-8f1e-ba4a945a2791\") " pod="kube-system/coredns-66bc5c9577-m8lfc"
	Oct 27 22:37:56 no-preload-188814 kubelet[2331]: I1027 22:37:56.638081    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p92zk\" (UniqueName: \"kubernetes.io/projected/486551a5-b1eb-4fb1-8f1e-ba4a945a2791-kube-api-access-p92zk\") pod \"coredns-66bc5c9577-m8lfc\" (UID: \"486551a5-b1eb-4fb1-8f1e-ba4a945a2791\") " pod="kube-system/coredns-66bc5c9577-m8lfc"
	Oct 27 22:37:56 no-preload-188814 kubelet[2331]: I1027 22:37:56.638105    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9bd12118-14fd-4ef6-a0f1-dd7130601f49-tmp\") pod \"storage-provisioner\" (UID: \"9bd12118-14fd-4ef6-a0f1-dd7130601f49\") " pod="kube-system/storage-provisioner"
	Oct 27 22:37:56 no-preload-188814 kubelet[2331]: I1027 22:37:56.638118    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxjx\" (UniqueName: \"kubernetes.io/projected/9bd12118-14fd-4ef6-a0f1-dd7130601f49-kube-api-access-lmxjx\") pod \"storage-provisioner\" (UID: \"9bd12118-14fd-4ef6-a0f1-dd7130601f49\") " pod="kube-system/storage-provisioner"
	Oct 27 22:37:57 no-preload-188814 kubelet[2331]: I1027 22:37:57.191075    2331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.191054145 podStartE2EDuration="14.191054145s" podCreationTimestamp="2025-10-27 22:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:37:57.181444569 +0000 UTC m=+20.154652311" watchObservedRunningTime="2025-10-27 22:37:57.191054145 +0000 UTC m=+20.164261884"
	Oct 27 22:37:57 no-preload-188814 kubelet[2331]: I1027 22:37:57.191501    2331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m8lfc" podStartSLOduration=15.191486695 podStartE2EDuration="15.191486695s" podCreationTimestamp="2025-10-27 22:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:37:57.191341358 +0000 UTC m=+20.164549100" watchObservedRunningTime="2025-10-27 22:37:57.191486695 +0000 UTC m=+20.164694436"
	Oct 27 22:37:59 no-preload-188814 kubelet[2331]: I1027 22:37:59.151546    2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgmp9\" (UniqueName: \"kubernetes.io/projected/9683c10c-e747-4fa9-9007-4f2974e50e4e-kube-api-access-lgmp9\") pod \"busybox\" (UID: \"9683c10c-e747-4fa9-9007-4f2974e50e4e\") " pod="default/busybox"
	Oct 27 22:38:02 no-preload-188814 kubelet[2331]: I1027 22:38:02.194527    2331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.071838308 podStartE2EDuration="3.194503911s" podCreationTimestamp="2025-10-27 22:37:59 +0000 UTC" firstStartedPulling="2025-10-27 22:37:59.348482721 +0000 UTC m=+22.321690446" lastFinishedPulling="2025-10-27 22:38:01.471148326 +0000 UTC m=+24.444356049" observedRunningTime="2025-10-27 22:38:02.194415126 +0000 UTC m=+25.167623052" watchObservedRunningTime="2025-10-27 22:38:02.194503911 +0000 UTC m=+25.167711653"
	Oct 27 22:38:08 no-preload-188814 kubelet[2331]: E1027 22:38:08.116691    2331 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60366->127.0.0.1:36673: write tcp 127.0.0.1:60366->127.0.0.1:36673: write: broken pipe
	
	
	==> storage-provisioner [802e321e9c3693a26dbed55609488d0f7f5caa0b5c21258bd4df39e604c616b3] <==
	I1027 22:37:56.953238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 22:37:56.964353       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 22:37:56.964416       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 22:37:56.967022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:37:56.972343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 22:37:56.972600       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 22:37:56.972720       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8244fb6-6076-4c58-b28a-71039405fb52", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-188814_68fe0f99-9b69-4abf-a651-e82a39d646c4 became leader
	I1027 22:37:56.972986       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-188814_68fe0f99-9b69-4abf-a651-e82a39d646c4!
	W1027 22:37:56.975789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:37:56.986637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 22:37:57.073285       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-188814_68fe0f99-9b69-4abf-a651-e82a39d646c4!
	W1027 22:37:58.990838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:37:58.995154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:00.999974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:01.004547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:03.007256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:03.010886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:05.014329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:05.018104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:07.021293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:07.024826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:09.028810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:09.039825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188814 -n no-preload-188814
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-188814 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.22s)

TestStartStop/group/old-k8s-version/serial/Pause (6.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-908589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-908589 --alsologtostderr -v=1: exit status 80 (1.74626975s)

-- stdout --
	* Pausing node old-k8s-version-908589 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 22:38:44.795176  730876 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:38:44.795454  730876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:44.795466  730876 out.go:374] Setting ErrFile to fd 2...
	I1027 22:38:44.795472  730876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:44.795757  730876 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:38:44.796122  730876 out.go:368] Setting JSON to false
	I1027 22:38:44.796177  730876 mustload.go:66] Loading cluster: old-k8s-version-908589
	I1027 22:38:44.798604  730876 config.go:182] Loaded profile config "old-k8s-version-908589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 22:38:44.799433  730876 cli_runner.go:164] Run: docker container inspect old-k8s-version-908589 --format={{.State.Status}}
	I1027 22:38:44.819322  730876 host.go:66] Checking if "old-k8s-version-908589" exists ...
	I1027 22:38:44.819673  730876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:38:44.887911  730876 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-27 22:38:44.874157236 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:38:44.888771  730876 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-908589 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 22:38:44.891498  730876 out.go:179] * Pausing node old-k8s-version-908589 ... 
	I1027 22:38:44.892988  730876 host.go:66] Checking if "old-k8s-version-908589" exists ...
	I1027 22:38:44.893349  730876 ssh_runner.go:195] Run: systemctl --version
	I1027 22:38:44.893409  730876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-908589
	I1027 22:38:44.915668  730876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/old-k8s-version-908589/id_rsa Username:docker}
	I1027 22:38:45.028554  730876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:38:45.044679  730876 pause.go:52] kubelet running: true
	I1027 22:38:45.044749  730876 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:38:45.224369  730876 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:38:45.224468  730876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:38:45.303172  730876 cri.go:89] found id: "a5bda7727c540811b4409b8ecc67d9d385823f5aa5de84580883039c1baf1935"
	I1027 22:38:45.303196  730876 cri.go:89] found id: "21430bbf8df99df3b9a23d0e6400e2be25bca17ae542da44b69472a011a78162"
	I1027 22:38:45.303200  730876 cri.go:89] found id: "ec59b02f91c0b8777c448403a25b84492d518f669cf7e6d1d62914de1ae6d861"
	I1027 22:38:45.303204  730876 cri.go:89] found id: "40b5ad6840c82eefedf9a6e76bbd8c07fa3d649ed396affb792017c3f80126e6"
	I1027 22:38:45.303208  730876 cri.go:89] found id: "f4690cc69163d663fdab691358519ee0401aa190792f240348c12d39a643e5f5"
	I1027 22:38:45.303213  730876 cri.go:89] found id: "6cdce94a5f78b08c7fa45e7720dfbf6930fe756536de03ceb5d36d0124ee1c23"
	I1027 22:38:45.303216  730876 cri.go:89] found id: "e64b44ab53a02f28c14e5582dc7be12f197b4831f11356e8d5c51aa28e9eff8e"
	I1027 22:38:45.303220  730876 cri.go:89] found id: "0552ed0e96ff667dac3ef7da44469e9aecf41285625ff22fbc94d09f10ebe42a"
	I1027 22:38:45.303224  730876 cri.go:89] found id: "e61d7b54f2b00d9f3cc449906592240dfddbc082a601333546e64cbf3aab5c08"
	I1027 22:38:45.303239  730876 cri.go:89] found id: "f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc"
	I1027 22:38:45.303243  730876 cri.go:89] found id: "3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c"
	I1027 22:38:45.303247  730876 cri.go:89] found id: ""
	I1027 22:38:45.303292  730876 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:38:45.316542  730876 retry.go:31] will retry after 320.209815ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:38:45Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:38:45.637086  730876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:38:45.651129  730876 pause.go:52] kubelet running: false
	I1027 22:38:45.651182  730876 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:38:45.804015  730876 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:38:45.804101  730876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:38:45.878886  730876 cri.go:89] found id: "a5bda7727c540811b4409b8ecc67d9d385823f5aa5de84580883039c1baf1935"
	I1027 22:38:45.878915  730876 cri.go:89] found id: "21430bbf8df99df3b9a23d0e6400e2be25bca17ae542da44b69472a011a78162"
	I1027 22:38:45.878920  730876 cri.go:89] found id: "ec59b02f91c0b8777c448403a25b84492d518f669cf7e6d1d62914de1ae6d861"
	I1027 22:38:45.878925  730876 cri.go:89] found id: "40b5ad6840c82eefedf9a6e76bbd8c07fa3d649ed396affb792017c3f80126e6"
	I1027 22:38:45.878929  730876 cri.go:89] found id: "f4690cc69163d663fdab691358519ee0401aa190792f240348c12d39a643e5f5"
	I1027 22:38:45.878934  730876 cri.go:89] found id: "6cdce94a5f78b08c7fa45e7720dfbf6930fe756536de03ceb5d36d0124ee1c23"
	I1027 22:38:45.878938  730876 cri.go:89] found id: "e64b44ab53a02f28c14e5582dc7be12f197b4831f11356e8d5c51aa28e9eff8e"
	I1027 22:38:45.878959  730876 cri.go:89] found id: "0552ed0e96ff667dac3ef7da44469e9aecf41285625ff22fbc94d09f10ebe42a"
	I1027 22:38:45.878964  730876 cri.go:89] found id: "e61d7b54f2b00d9f3cc449906592240dfddbc082a601333546e64cbf3aab5c08"
	I1027 22:38:45.878973  730876 cri.go:89] found id: "f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc"
	I1027 22:38:45.878978  730876 cri.go:89] found id: "3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c"
	I1027 22:38:45.878982  730876 cri.go:89] found id: ""
	I1027 22:38:45.879030  730876 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:38:45.894294  730876 retry.go:31] will retry after 291.23675ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:38:45Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:38:46.186002  730876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:38:46.206074  730876 pause.go:52] kubelet running: false
	I1027 22:38:46.206138  730876 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:38:46.367588  730876 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:38:46.367684  730876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:38:46.436299  730876 cri.go:89] found id: "a5bda7727c540811b4409b8ecc67d9d385823f5aa5de84580883039c1baf1935"
	I1027 22:38:46.436321  730876 cri.go:89] found id: "21430bbf8df99df3b9a23d0e6400e2be25bca17ae542da44b69472a011a78162"
	I1027 22:38:46.436325  730876 cri.go:89] found id: "ec59b02f91c0b8777c448403a25b84492d518f669cf7e6d1d62914de1ae6d861"
	I1027 22:38:46.436328  730876 cri.go:89] found id: "40b5ad6840c82eefedf9a6e76bbd8c07fa3d649ed396affb792017c3f80126e6"
	I1027 22:38:46.436331  730876 cri.go:89] found id: "f4690cc69163d663fdab691358519ee0401aa190792f240348c12d39a643e5f5"
	I1027 22:38:46.436334  730876 cri.go:89] found id: "6cdce94a5f78b08c7fa45e7720dfbf6930fe756536de03ceb5d36d0124ee1c23"
	I1027 22:38:46.436337  730876 cri.go:89] found id: "e64b44ab53a02f28c14e5582dc7be12f197b4831f11356e8d5c51aa28e9eff8e"
	I1027 22:38:46.436339  730876 cri.go:89] found id: "0552ed0e96ff667dac3ef7da44469e9aecf41285625ff22fbc94d09f10ebe42a"
	I1027 22:38:46.436342  730876 cri.go:89] found id: "e61d7b54f2b00d9f3cc449906592240dfddbc082a601333546e64cbf3aab5c08"
	I1027 22:38:46.436354  730876 cri.go:89] found id: "f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc"
	I1027 22:38:46.436357  730876 cri.go:89] found id: "3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c"
	I1027 22:38:46.436360  730876 cri.go:89] found id: ""
	I1027 22:38:46.436396  730876 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:38:46.451005  730876 out.go:203] 
	W1027 22:38:46.452111  730876 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:38:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:38:46.452127  730876 out.go:285] * 
	W1027 22:38:46.456094  730876 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:38:46.457005  730876 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-908589 --alsologtostderr -v=1 failed: exit status 80
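Triage note on the failure above: each retry bottoms out in "sudo runc list -f json" failing with "open /run/runc: no such file or directory", even though CRI-O itself still enumerates the expected container IDs. A minimal manual cross-check against the node, assuming the profile from this run is still up (the runc state directory depends on how the runtime is configured, so /run/runc is specific to this setup):

	minikube -p old-k8s-version-908589 ssh -- sudo ls /run/runc        # fails here: No such file or directory
	minikube -p old-k8s-version-908589 ssh -- sudo crictl ps --quiet   # CRI-O still lists the container IDs seen in the pause log
	minikube -p old-k8s-version-908589 ssh -- sudo runc list -f json   # reproduces the status-1 exit that pause retried on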
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-908589
helpers_test.go:243: (dbg) docker inspect old-k8s-version-908589:

-- stdout --
	[
	    {
	        "Id": "2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b",
	        "Created": "2025-10-27T22:36:26.560709331Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:37:47.062330833Z",
	            "FinishedAt": "2025-10-27T22:37:45.948442115Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/hosts",
	        "LogPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b-json.log",
	        "Name": "/old-k8s-version-908589",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-908589:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-908589",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b",
	                "LowerDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-908589",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-908589/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-908589",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-908589",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-908589",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b247857440003b4f72b816ca9c6393459d0a2cb7e49a4cc53fe57e2a90f88f0f",
	            "SandboxKey": "/var/run/docker/netns/b24785744000",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-908589": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:0a:72:ff:b2:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "798a573c50beee8a1800510c05b8fefb38677fa31ecba8e611494c61259bbf2b",
	                    "EndpointID": "fa72012937bdd81b6e188415f8b09ba348d8cabc0b02e0c5a1b483db80dd873a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-908589",
	                        "2d571bec60f7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
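As a quick cross-check, the same Go template the pause path ran through cli_runner (see the "docker container inspect -f" call in the stderr above) can be replayed by hand against this inspect output; it should print 33063, the host port the ssh client actually dialed:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-908589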
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908589 -n old-k8s-version-908589
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908589 -n old-k8s-version-908589: exit status 2 (358.444237ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-908589 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-908589 logs -n 25: (1.577798492s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-293335 sudo crio config                                                                                                                                                                                                             │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ delete  │ -p cilium-293335                                                                                                                                                                                                                              │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ stop    │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ stop    │ -p old-k8s-version-908589 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-908589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ start   │ -p cert-expiration-219241 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-219241 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ stop    │ -p no-preload-188814 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p cert-expiration-219241                                                                                                                                                                                                                     │ cert-expiration-219241 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976     │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-188814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ image   │ old-k8s-version-908589 image list --format=json                                                                                                                                                                                               │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ pause   │ -p old-k8s-version-908589 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:38:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:38:29.130543  726897 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:38:29.130850  726897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:29.130862  726897 out.go:374] Setting ErrFile to fd 2...
	I1027 22:38:29.130868  726897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:29.131127  726897 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:38:29.131644  726897 out.go:368] Setting JSON to false
	I1027 22:38:29.132745  726897 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8448,"bootTime":1761596261,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:38:29.132843  726897 start.go:143] virtualization: kvm guest
	I1027 22:38:29.134751  726897 out.go:179] * [no-preload-188814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:38:29.135954  726897 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:38:29.135994  726897 notify.go:221] Checking for updates...
	I1027 22:38:29.137997  726897 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:38:29.139392  726897 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:29.141002  726897 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:38:29.142124  726897 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:38:29.143198  726897 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:38:29.144599  726897 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:29.145315  726897 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:38:29.168555  726897 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:38:29.168639  726897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:38:29.227225  726897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:38:29.216838075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:38:29.227381  726897 docker.go:318] overlay module found
	I1027 22:38:29.229119  726897 out.go:179] * Using the docker driver based on existing profile
	I1027 22:38:25.368790  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:27.727795  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:41328->192.168.76.2:8443: read: connection reset by peer
	I1027 22:38:27.727869  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:27.727924  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:27.757322  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:27.757346  682462 cri.go:89] found id: "c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe"
	I1027 22:38:27.757352  682462 cri.go:89] found id: ""
	I1027 22:38:27.757362  682462 logs.go:282] 2 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810 c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe]
	I1027 22:38:27.757408  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.761254  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.765308  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:27.765363  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:27.791839  682462 cri.go:89] found id: ""
	I1027 22:38:27.791864  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.791872  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:27.791878  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:27.791929  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:27.819717  682462 cri.go:89] found id: ""
	I1027 22:38:27.819742  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.819750  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:27.819756  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:27.819803  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:27.846197  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:27.846228  682462 cri.go:89] found id: ""
	I1027 22:38:27.846238  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:27.846290  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.850217  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:27.850280  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:27.877936  682462 cri.go:89] found id: ""
	I1027 22:38:27.877986  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.877995  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:27.878001  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:27.878066  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:27.904709  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:27.904729  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:27.904734  682462 cri.go:89] found id: ""
	I1027 22:38:27.904742  682462 logs.go:282] 2 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e]
	I1027 22:38:27.904794  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.908890  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.913882  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:27.913996  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:27.941560  682462 cri.go:89] found id: ""
	I1027 22:38:27.941582  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.941589  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:27.941595  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:27.941650  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:27.968903  682462 cri.go:89] found id: ""
	I1027 22:38:27.968930  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.968952  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:27.968978  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:27.968998  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:27.999830  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:27.999862  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:28.018932  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:28.018977  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:28.055565  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:38:28.055595  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:28.083081  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:28.083114  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:28.138465  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:28.138499  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:28.229142  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:28.229173  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:28.290207  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:28.290239  682462 logs.go:123] Gathering logs for kube-apiserver [c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe] ...
	I1027 22:38:28.290254  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe"
	I1027 22:38:28.323492  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:28.323519  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:28.374756  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:28.374779  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:29.230085  726897 start.go:307] selected driver: docker
	I1027 22:38:29.230098  726897 start.go:928] validating driver "docker" against &{Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:29.230214  726897 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:38:29.231011  726897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:38:29.291602  726897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:38:29.281919651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:38:29.291843  726897 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:38:29.291876  726897 cni.go:84] Creating CNI manager for ""
	I1027 22:38:29.291930  726897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:29.292027  726897 start.go:351] cluster config:
	{Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:29.293350  726897 out.go:179] * Starting "no-preload-188814" primary control-plane node in "no-preload-188814" cluster
	I1027 22:38:29.294199  726897 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:38:29.295405  726897 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:38:29.296574  726897 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:38:29.296666  726897 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:38:29.296713  726897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/config.json ...
	I1027 22:38:29.296851  726897 cache.go:107] acquiring lock: {Name:mk07939a87c1b452f98e2733b4044aaef5b7beb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.296903  726897 cache.go:107] acquiring lock: {Name:mk200c8a2caaaad3c8ed76649a48f615a1ae5be9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297003  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 22:38:29.296993  726897 cache.go:107] acquiring lock: {Name:mk7baa67397d0c68b56096a5558e51581596a4e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297015  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1027 22:38:29.296856  726897 cache.go:107] acquiring lock: {Name:mke466d23cdbe7dd8079b566141851102bac577e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297016  726897 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 181.457µs
	I1027 22:38:29.296996  726897 cache.go:107] acquiring lock: {Name:mk8b6b09ba52dfb608da0a36c4ec3530523b8436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297024  726897 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 145.972µs
	I1027 22:38:29.297043  726897 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 22:38:29.297044  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 22:38:29.297035  726897 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 22:38:29.297053  726897 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 65.933µs
	I1027 22:38:29.297061  726897 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 22:38:29.297052  726897 cache.go:107] acquiring lock: {Name:mkb0147fb3d8ecd8b50c6fd01f6ae7394f0cd687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297064  726897 cache.go:107] acquiring lock: {Name:mk413fcda2edd2da77552c9bdc2211a33f344da6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.296997  726897 cache.go:107] acquiring lock: {Name:mke2de66fafbe14869d74cc23f68775c4135be46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297086  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 22:38:29.297103  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 22:38:29.297103  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 22:38:29.297107  726897 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 260.614µs
	I1027 22:38:29.297114  726897 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 64.53µs
	I1027 22:38:29.297119  726897 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 55.973µs
	I1027 22:38:29.297126  726897 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 22:38:29.297129  726897 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 22:38:29.297119  726897 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 22:38:29.297167  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 22:38:29.297182  726897 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 257.332µs
	I1027 22:38:29.297195  726897 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 22:38:29.297260  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 22:38:29.297285  726897 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 347.911µs
	I1027 22:38:29.297301  726897 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 22:38:29.297313  726897 cache.go:87] Successfully saved all images to host disk.
	I1027 22:38:29.318241  726897 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:38:29.318258  726897 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:38:29.318274  726897 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:38:29.318295  726897 start.go:360] acquireMachinesLock for no-preload-188814: {Name:mkd09e7bc16b18c969a0e9138576a74468fd84c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.318343  726897 start.go:364] duration metric: took 33.301µs to acquireMachinesLock for "no-preload-188814"
	I1027 22:38:29.318359  726897 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:38:29.318364  726897 fix.go:55] fixHost starting: 
	I1027 22:38:29.318560  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:29.336530  726897 fix.go:113] recreateIfNeeded on no-preload-188814: state=Stopped err=<nil>
	W1027 22:38:29.336563  726897 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 22:38:29.041631  724915 ssh_runner.go:195] Run: cat /version.json
	I1027 22:38:29.041685  724915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:38:29.041697  724915 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:38:29.041777  724915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:38:29.060306  724915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:38:29.061144  724915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:38:29.223146  724915 ssh_runner.go:195] Run: systemctl --version
	I1027 22:38:29.230438  724915 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:38:29.271974  724915 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:38:29.277406  724915 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:38:29.277491  724915 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:38:29.304532  724915 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
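For readability, the find/mv one-liner at 22:38:29.277491 can be untangled as below. It renames every bridge or podman CNI config out of the way (suffix .mk_disabled) so that only the kindnet configuration recommended earlier is left for CRI-O to load; the paths and name patterns come from the log, while the quoting here is illustrative.

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
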
	I1027 22:38:29.304554  724915 start.go:496] detecting cgroup driver to use...
	I1027 22:38:29.304585  724915 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:38:29.304635  724915 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:38:29.322688  724915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:38:29.335744  724915 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:38:29.335786  724915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:38:29.352699  724915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:38:29.374182  724915 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:38:29.473914  724915 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:38:29.572856  724915 docker.go:234] disabling docker service ...
	I1027 22:38:29.572930  724915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:38:29.593073  724915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:38:29.606851  724915 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:38:29.696043  724915 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:38:29.785238  724915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:38:29.797842  724915 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:38:29.814936  724915 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:38:29.815044  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.826385  724915 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:38:29.826451  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.836549  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.845608  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.854195  724915 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:38:29.862106  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.870835  724915 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.887847  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.897744  724915 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:38:29.906837  724915 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:38:29.914659  724915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:30.001441  724915 ssh_runner.go:195] Run: sudo systemctl restart crio
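Taken together, the sed edits at 22:38:29.815044 through 22:38:29.887847 leave /etc/crio/crio.conf.d/02-crio.conf in roughly the following shape. This consolidated heredoc is a sketch of the end state, not the literal command run above (minikube edits keys in place, and the TOML section headers shown here are assumed, since the log only shows the individual key rewrites):

	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio
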
	I1027 22:38:30.109745  724915 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:38:30.109821  724915 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:38:30.115286  724915 start.go:564] Will wait 60s for crictl version
	I1027 22:38:30.115350  724915 ssh_runner.go:195] Run: which crictl
	I1027 22:38:30.119126  724915 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:38:30.145039  724915 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:38:30.145116  724915 ssh_runner.go:195] Run: crio --version
	I1027 22:38:30.173331  724915 ssh_runner.go:195] Run: crio --version
	I1027 22:38:30.203902  724915 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 22:38:27.031285  718696 pod_ready.go:104] pod "coredns-5dd5756b68-jwp99" is not "Ready", error: <nil>
	W1027 22:38:29.532236  718696 pod_ready.go:104] pod "coredns-5dd5756b68-jwp99" is not "Ready", error: <nil>
	I1027 22:38:30.531141  718696 pod_ready.go:94] pod "coredns-5dd5756b68-jwp99" is "Ready"
	I1027 22:38:30.531168  718696 pod_ready.go:86] duration metric: took 32.506010253s for pod "coredns-5dd5756b68-jwp99" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.534346  718696 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.538981  718696 pod_ready.go:94] pod "etcd-old-k8s-version-908589" is "Ready"
	I1027 22:38:30.539007  718696 pod_ready.go:86] duration metric: took 4.639408ms for pod "etcd-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.542102  718696 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.546641  718696 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-908589" is "Ready"
	I1027 22:38:30.546667  718696 pod_ready.go:86] duration metric: took 4.542766ms for pod "kube-apiserver-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.549707  718696 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.728780  718696 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-908589" is "Ready"
	I1027 22:38:30.728810  718696 pod_ready.go:86] duration metric: took 179.081738ms for pod "kube-controller-manager-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.930032  718696 pod_ready.go:83] waiting for pod "kube-proxy-srms5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:31.328332  718696 pod_ready.go:94] pod "kube-proxy-srms5" is "Ready"
	I1027 22:38:31.328363  718696 pod_ready.go:86] duration metric: took 398.305351ms for pod "kube-proxy-srms5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:31.529129  718696 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:31.928617  718696 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-908589" is "Ready"
	I1027 22:38:31.928639  718696 pod_ready.go:86] duration metric: took 399.480579ms for pod "kube-scheduler-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:31.928650  718696 pod_ready.go:40] duration metric: took 33.907493908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:38:31.975577  718696 start.go:626] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1027 22:38:31.976850  718696 out.go:203] 
	W1027 22:38:31.977822  718696 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 22:38:31.978931  718696 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 22:38:31.980064  718696 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-908589" cluster and "default" namespace by default
	I1027 22:38:30.204927  724915 cli_runner.go:164] Run: docker network inspect embed-certs-829976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:38:30.221604  724915 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 22:38:30.225891  724915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
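The hosts update above uses an idempotent pattern: strip any stale mapping for the name, append the fresh one, and replace /etc/hosts in a single cp from a temp file. A readable sketch with the values from this run:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.85.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
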
	I1027 22:38:30.236363  724915 kubeadm.go:884] updating cluster {Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:38:30.236509  724915 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:38:30.236571  724915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:38:30.270050  724915 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:38:30.270072  724915 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:38:30.270116  724915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:38:30.297812  724915 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:38:30.297838  724915 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:38:30.297848  724915 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 22:38:30.297976  724915 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-829976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:38:30.298057  724915 ssh_runner.go:195] Run: crio config
	I1027 22:38:30.344490  724915 cni.go:84] Creating CNI manager for ""
	I1027 22:38:30.344512  724915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:30.344532  724915 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:38:30.344559  724915 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-829976 NodeName:embed-certs-829976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:38:30.344710  724915 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-829976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
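	# Before this generated multi-document config is handed to kubeadm (it is
	# written to /var/tmp/minikube/kubeadm.yaml.new a few lines below), it can
	# be sanity-checked offline with kubeadm's built-in validator. A sketch,
	# assuming the binary and file paths seen in this log:
	#   sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	#     --config /var/tmp/minikube/kubeadm.yaml.new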
	
	I1027 22:38:30.344783  724915 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:38:30.353227  724915 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:38:30.353300  724915 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:38:30.361260  724915 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 22:38:30.374089  724915 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:38:30.389216  724915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 22:38:30.401888  724915 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:38:30.405649  724915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:38:30.415760  724915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:30.495598  724915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:38:30.520529  724915 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976 for IP: 192.168.85.2
	I1027 22:38:30.520554  724915 certs.go:195] generating shared ca certs ...
	I1027 22:38:30.520571  724915 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:30.520726  724915 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:38:30.520771  724915 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:38:30.520782  724915 certs.go:257] generating profile certs ...
	I1027 22:38:30.520840  724915 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.key
	I1027 22:38:30.520853  724915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.crt with IP's: []
	I1027 22:38:31.042927  724915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.crt ...
	I1027 22:38:31.042965  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.crt: {Name:mk2a7ce6744a7951ad65a86fdb0b8152d6cec650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.043174  724915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.key ...
	I1027 22:38:31.043197  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.key: {Name:mk3891a0f4239ba078236dd177d4d9ba77cd835c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.043334  724915 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7
	I1027 22:38:31.043353  724915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt.a2d2d0b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 22:38:31.342123  724915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt.a2d2d0b7 ...
	I1027 22:38:31.342154  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt.a2d2d0b7: {Name:mk99b26975ff00aeeefd15fbd54077d4849c8bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.342377  724915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7 ...
	I1027 22:38:31.342403  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7: {Name:mk7a32134132d91c1918a8248893a7cbcb723e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.342541  724915 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt.a2d2d0b7 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt
	I1027 22:38:31.342651  724915 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key
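The apiserver profile cert assembled above embeds the SANs requested at 22:38:31.043353 (10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2); they can be verified on disk with openssl (a sketch using the path from this run):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
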
	I1027 22:38:31.342713  724915 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key
	I1027 22:38:31.342730  724915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt with IP's: []
	I1027 22:38:31.811408  724915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt ...
	I1027 22:38:31.811440  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt: {Name:mkc0fe77cda16a3d91122f2526bdc4cddd7e68c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.811627  724915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key ...
	I1027 22:38:31.811640  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key: {Name:mk401c1734200a084964c7e10451a046e9211914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.811822  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:38:31.811863  724915 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:38:31.811873  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:38:31.811895  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:38:31.811917  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:38:31.811937  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:38:31.811991  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:38:31.812674  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:38:31.832806  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:38:31.851079  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:38:31.868789  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:38:31.886217  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 22:38:31.904131  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:38:31.921599  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:38:31.940340  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:38:31.959296  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:38:31.979333  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:38:32.000109  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:38:32.019258  724915 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:38:32.032622  724915 ssh_runner.go:195] Run: openssl version
	I1027 22:38:32.039169  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:38:32.048501  724915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:38:32.053317  724915 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:38:32.053374  724915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:38:32.094191  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:38:32.103714  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:38:32.112580  724915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:32.117102  724915 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:32.117150  724915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:32.152079  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:38:32.161391  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:38:32.170747  724915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:38:32.174592  724915 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:38:32.174647  724915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:38:32.211828  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
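The hash-named symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the system trust store indexes CA certificates. A sketch of the derivation for one of them, using a path from this run:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0
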
	I1027 22:38:32.221710  724915 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:38:32.225577  724915 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:38:32.225645  724915 kubeadm.go:401] StartCluster: {Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:32.225772  724915 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:38:32.225831  724915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:38:32.257620  724915 cri.go:89] found id: ""
	I1027 22:38:32.257700  724915 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:38:32.269595  724915 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:38:32.279233  724915 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:38:32.279296  724915 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:38:32.288311  724915 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:38:32.288348  724915 kubeadm.go:158] found existing configuration files:
	
	I1027 22:38:32.288394  724915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:38:32.297700  724915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:38:32.297763  724915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:38:32.305970  724915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:38:32.314247  724915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:38:32.314317  724915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:38:32.322128  724915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:38:32.330891  724915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:38:32.331000  724915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:38:32.339588  724915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:38:32.348810  724915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:38:32.348877  724915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:38:32.356640  724915 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:38:32.418192  724915 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 22:38:32.478367  724915 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 22:38:29.338766  726897 out.go:252] * Restarting existing docker container for "no-preload-188814" ...
	I1027 22:38:29.338851  726897 cli_runner.go:164] Run: docker start no-preload-188814
	I1027 22:38:29.598021  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:29.617797  726897 kic.go:430] container "no-preload-188814" state is running.
	I1027 22:38:29.618285  726897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:38:29.636150  726897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/config.json ...
	I1027 22:38:29.636406  726897 machine.go:94] provisionDockerMachine start ...
	I1027 22:38:29.636506  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:29.660669  726897 main.go:143] libmachine: Using SSH client type: native
	I1027 22:38:29.661015  726897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1027 22:38:29.661035  726897 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:38:29.661741  726897 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47260->127.0.0.1:33073: read: connection reset by peer
	I1027 22:38:32.804540  726897 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-188814
	
	I1027 22:38:32.804570  726897 ubuntu.go:182] provisioning hostname "no-preload-188814"
	I1027 22:38:32.804642  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:32.823996  726897 main.go:143] libmachine: Using SSH client type: native
	I1027 22:38:32.824301  726897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1027 22:38:32.824321  726897 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-188814 && echo "no-preload-188814" | sudo tee /etc/hostname
	I1027 22:38:32.978009  726897 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-188814
	
	I1027 22:38:32.978110  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:32.996457  726897 main.go:143] libmachine: Using SSH client type: native
	I1027 22:38:32.996709  726897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1027 22:38:32.996727  726897 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188814/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188814' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:38:33.141263  726897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:38:33.141295  726897 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:38:33.141342  726897 ubuntu.go:190] setting up certificates
	I1027 22:38:33.141361  726897 provision.go:84] configureAuth start
	I1027 22:38:33.141425  726897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:38:33.164147  726897 provision.go:143] copyHostCerts
	I1027 22:38:33.164221  726897 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:38:33.164250  726897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:38:33.164336  726897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:38:33.164460  726897 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:38:33.164475  726897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:38:33.164517  726897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:38:33.164607  726897 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:38:33.164622  726897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:38:33.164659  726897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:38:33.164727  726897 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.no-preload-188814 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-188814]
	I1027 22:38:33.422338  726897 provision.go:177] copyRemoteCerts
	I1027 22:38:33.422415  726897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:38:33.422472  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:33.441348  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:33.545310  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:38:33.564774  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:38:33.584116  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:38:33.603434  726897 provision.go:87] duration metric: took 462.05285ms to configureAuth
	I1027 22:38:33.603475  726897 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:38:33.603692  726897 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:33.603817  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:33.623515  726897 main.go:143] libmachine: Using SSH client type: native
	I1027 22:38:33.623762  726897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1027 22:38:33.623777  726897 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:38:33.942205  726897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:38:33.942239  726897 machine.go:97] duration metric: took 4.305804007s to provisionDockerMachine
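A quick way to confirm the option above landed after the restart (a sketch; the drop-in path is from the log, while the assumption that the crio unit actually reads /etc/sysconfig/crio.minikube is minikube's wiring, not shown here):

	cat /etc/sysconfig/crio.minikube
	ps -o args= -C crio | tr ' ' '\n' | grep -A1 -- --insecure-registry
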
	I1027 22:38:33.942258  726897 start.go:293] postStartSetup for "no-preload-188814" (driver="docker")
	I1027 22:38:33.942274  726897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:38:33.942378  726897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:38:33.942436  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:33.963099  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:34.066504  726897 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:38:34.070536  726897 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:38:34.070566  726897 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:38:34.070579  726897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:38:34.070648  726897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:38:34.070753  726897 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:38:34.070868  726897 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:38:34.079610  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:38:34.101775  726897 start.go:296] duration metric: took 159.498893ms for postStartSetup
	I1027 22:38:34.101845  726897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:38:34.101912  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:34.122453  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:30.903174  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:30.903686  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:30.903744  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:30.903802  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:30.931429  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:30.931451  682462 cri.go:89] found id: ""
	I1027 22:38:30.931464  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:30.931531  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:30.935547  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:30.935612  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:30.962130  682462 cri.go:89] found id: ""
	I1027 22:38:30.962162  682462 logs.go:282] 0 containers: []
	W1027 22:38:30.962175  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:30.962188  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:30.962252  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:30.989785  682462 cri.go:89] found id: ""
	I1027 22:38:30.989808  682462 logs.go:282] 0 containers: []
	W1027 22:38:30.989817  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:30.989826  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:30.989885  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:31.017793  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:31.017815  682462 cri.go:89] found id: ""
	I1027 22:38:31.017823  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:31.017882  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:31.022130  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:31.022211  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:31.050645  682462 cri.go:89] found id: ""
	I1027 22:38:31.050671  682462 logs.go:282] 0 containers: []
	W1027 22:38:31.050683  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:31.050691  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:31.050743  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:31.081341  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:31.081367  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:31.081372  682462 cri.go:89] found id: ""
	I1027 22:38:31.081382  682462 logs.go:282] 2 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e]
	I1027 22:38:31.081447  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:31.085582  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:31.089474  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:31.089550  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:31.116522  682462 cri.go:89] found id: ""
	I1027 22:38:31.116550  682462 logs.go:282] 0 containers: []
	W1027 22:38:31.116561  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:31.116579  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:31.116640  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:31.145803  682462 cri.go:89] found id: ""
	I1027 22:38:31.145831  682462 logs.go:282] 0 containers: []
	W1027 22:38:31.145843  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:31.145861  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:31.145876  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:31.166122  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:31.166161  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:31.205622  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:31.205661  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:31.233327  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:31.233357  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:31.293777  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:31.293812  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:31.328603  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:31.328640  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:31.420871  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:31.420909  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:31.483181  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:31.483210  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:31.483228  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:31.538306  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:38:31.538346  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
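	The retry loop in this second stream keeps probing the apiserver's /healthz and re-collecting logs while the connection is refused; the probe itself is equivalent to the following (a sketch):
	
	  # -k: the apiserver cert is not in the local trust store; --max-time mirrors the short probe timeout
	  curl -k --max-time 2 https://192.168.76.2:8443/healthz
	  # while kube-apiserver is down this fails with "connection refused",
	  # matching the "stopped: ... dial tcp 192.168.76.2:8443" lines above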
	I1027 22:38:34.069181  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:34.069566  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:34.069620  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:34.069679  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:34.101353  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:34.101376  682462 cri.go:89] found id: ""
	I1027 22:38:34.101386  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:34.101458  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:34.106259  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:34.106333  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:34.138741  682462 cri.go:89] found id: ""
	I1027 22:38:34.138772  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.138784  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:34.138792  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:34.138850  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:34.170189  682462 cri.go:89] found id: ""
	I1027 22:38:34.170214  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.170222  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:34.170229  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:34.170280  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:34.201456  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:34.201482  682462 cri.go:89] found id: ""
	I1027 22:38:34.201494  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:34.201562  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:34.206190  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:34.206276  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:34.241598  682462 cri.go:89] found id: ""
	I1027 22:38:34.241633  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.241649  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:34.241659  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:34.241735  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:34.275599  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:34.275618  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:34.275623  682462 cri.go:89] found id: ""
	I1027 22:38:34.275638  682462 logs.go:282] 2 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e]
	I1027 22:38:34.275691  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:34.226824  726897 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:38:34.232920  726897 fix.go:57] duration metric: took 4.914545091s for fixHost
	I1027 22:38:34.232978  726897 start.go:83] releasing machines lock for "no-preload-188814", held for 4.914623118s
	I1027 22:38:34.233058  726897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:38:34.253412  726897 ssh_runner.go:195] Run: cat /version.json
	I1027 22:38:34.253477  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:34.253486  726897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:38:34.253572  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:34.275492  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:34.275779  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:34.434291  726897 ssh_runner.go:195] Run: systemctl --version
	I1027 22:38:34.442304  726897 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:38:34.487934  726897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:38:34.493498  726897 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:38:34.493574  726897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:38:34.502757  726897 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
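	The single-line find above is what "disabling bridge cni configs" means in practice: any *bridge* or *podman* config in /etc/cni/net.d is renamed with an .mk_disabled suffix so the runtime ignores it until kindnet lays down its own config. A readable expansion (a sketch; quoting added for safety, and the inner sudo dropped since the outer command already runs as root):
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;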
	I1027 22:38:34.502784  726897 start.go:496] detecting cgroup driver to use...
	I1027 22:38:34.502852  726897 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:38:34.502914  726897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:38:34.519974  726897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:38:34.533797  726897 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:38:34.533860  726897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:38:34.551298  726897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:38:34.566077  726897 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:38:34.658336  726897 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:38:34.751649  726897 docker.go:234] disabling docker service ...
	I1027 22:38:34.751715  726897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:38:34.767723  726897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:38:34.783258  726897 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:38:34.866426  726897 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:38:34.952086  726897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
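	Only one runtime may own the kubelet's CRI socket, so before (re)configuring CRI-O the cri-docker and docker units are stopped, disabled, and masked; condensed from the commands above (a sketch):
	
	  sudo systemctl stop -f cri-docker.socket cri-docker.service
	  sudo systemctl disable cri-docker.socket
	  sudo systemctl mask cri-docker.service
	  sudo systemctl stop -f docker.socket docker.service
	  sudo systemctl disable docker.socket
	  sudo systemctl mask docker.service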
	I1027 22:38:34.966046  726897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:38:34.981314  726897 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:38:34.981384  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:34.991313  726897 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:38:34.991378  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.001065  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.010726  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.020553  726897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:38:35.029267  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.038769  726897 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.048795  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.058278  726897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:38:35.065972  726897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:38:35.073828  726897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:35.162332  726897 ssh_runner.go:195] Run: sudo systemctl restart crio
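	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys before the restart (a sketch; the section headers follow CRI-O's stock layout, which this run does not print):
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]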
	I1027 22:38:35.274891  726897 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:38:35.275017  726897 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:38:35.279706  726897 start.go:564] Will wait 60s for crictl version
	I1027 22:38:35.279796  726897 ssh_runner.go:195] Run: which crictl
	I1027 22:38:35.284406  726897 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:38:35.311426  726897 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:38:35.311526  726897 ssh_runner.go:195] Run: crio --version
	I1027 22:38:35.343236  726897 ssh_runner.go:195] Run: crio --version
	I1027 22:38:35.376620  726897 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:38:35.377706  726897 cli_runner.go:164] Run: docker network inspect no-preload-188814 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:38:35.396543  726897 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 22:38:35.401268  726897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
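	The hosts update above is minikube's idempotent pattern: strip any stale host.minikube.internal line, append the current mapping, and copy the file back in one shot. Standalone form (a sketch; the grep pattern contains a literal tab, as in the logged command):
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    echo "192.168.94.1	host.minikube.internal"
	  } > /tmp/hosts.new && sudo cp /tmp/hosts.new /etc/hosts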
	I1027 22:38:35.412909  726897 kubeadm.go:884] updating cluster {Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:38:35.413061  726897 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:38:35.413100  726897 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:38:35.448113  726897 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:38:35.448142  726897 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:38:35.448153  726897 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 22:38:35.448278  726897 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-188814 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:38:35.448363  726897 ssh_runner.go:195] Run: crio config
	I1027 22:38:35.510488  726897 cni.go:84] Creating CNI manager for ""
	I1027 22:38:35.510512  726897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:35.510546  726897 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:38:35.510610  726897 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188814 NodeName:no-preload-188814 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:38:35.510810  726897 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188814"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:38:35.510913  726897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:38:35.519895  726897 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:38:35.519981  726897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:38:35.528532  726897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:38:35.542424  726897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:38:35.556568  726897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
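	The rendered config lands in /var/tmp/minikube/kubeadm.yaml.new; on a fresh cluster it would be handed to kubeadm directly, along the lines of the following (a sketch; minikube's real invocation adds preflight and phase flags not shown in this run):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml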
	I1027 22:38:35.570882  726897 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:38:35.575150  726897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:38:35.586468  726897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:35.668886  726897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:38:35.699129  726897 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814 for IP: 192.168.94.2
	I1027 22:38:35.699154  726897 certs.go:195] generating shared ca certs ...
	I1027 22:38:35.699175  726897 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:35.699339  726897 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:38:35.699395  726897 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:38:35.699409  726897 certs.go:257] generating profile certs ...
	I1027 22:38:35.699513  726897 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.key
	I1027 22:38:35.699593  726897 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key.c506b838
	I1027 22:38:35.699650  726897 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.key
	I1027 22:38:35.699790  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:38:35.699836  726897 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:38:35.699851  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:38:35.699887  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:38:35.699919  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:38:35.699977  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:38:35.700044  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:38:35.700922  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:38:35.722536  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:38:35.744343  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:38:35.767725  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:38:35.798457  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:38:35.817990  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:38:35.843082  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:38:35.862167  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:38:35.881635  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:38:35.901160  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:38:35.922116  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:38:35.942874  726897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:38:35.956673  726897 ssh_runner.go:195] Run: openssl version
	I1027 22:38:35.963420  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:38:35.972608  726897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:38:35.976755  726897 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:38:35.976816  726897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:38:36.014377  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:38:36.024514  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:38:36.037057  726897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:38:36.043555  726897 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:38:36.043732  726897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:38:36.085132  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:38:36.094742  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:38:36.104603  726897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:36.109039  726897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:36.109092  726897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:36.145629  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
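	The 51391683.0 / 3ec20f2e.0 / b5213941.0 names above are OpenSSL subject-hash links: tools resolve a CA in /etc/ssl/certs by hashing its subject, so each certificate gets a symlink named <hash>.0. The generic recipe (a sketch):
	
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"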
	I1027 22:38:36.155102  726897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:38:36.159502  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:38:36.196158  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:38:36.243545  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:38:36.297875  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:38:36.354989  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:38:36.411613  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
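	Each -checkend 86400 probe above asks whether the certificate expires within the next 24 hours: openssl exits 0 if the cert stays valid past that window and non-zero otherwise, so the check reduces to a plain shell test (a sketch):
	
	  if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "still valid for at least 24h"
	  else
	    echo "expires within 24h - would be regenerated"
	  fi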
	I1027 22:38:36.450358  726897 kubeadm.go:401] StartCluster: {Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:36.450479  726897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:38:36.450563  726897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:38:36.492218  726897 cri.go:89] found id: "cb9c2393e547842667e6423cc2d69ddfd9af4a1579d9d9531bc90992a0e1b634"
	I1027 22:38:36.492258  726897 cri.go:89] found id: "221d83fbd903479a3c762233eb12a7ec04e14004807c2ce9ea61f8e212524c54"
	I1027 22:38:36.492264  726897 cri.go:89] found id: "002c10e5f271a370eae7e9ac4bbcfa8188b01c92b6b9cb7d034828d114167209"
	I1027 22:38:36.492268  726897 cri.go:89] found id: "da762329de2a8c6c1610d73b7afd01c216fefae715c921b854c125c03fe0ac85"
	I1027 22:38:36.492272  726897 cri.go:89] found id: ""
	I1027 22:38:36.492324  726897 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:38:36.510731  726897 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:38:36Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:38:36.511257  726897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:38:36.522504  726897 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:38:36.522526  726897 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:38:36.522577  726897 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:38:36.532923  726897 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:38:36.533814  726897 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-188814" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:36.534355  726897 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-188814" cluster setting kubeconfig missing "no-preload-188814" context setting]
	I1027 22:38:36.535484  726897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:36.537521  726897 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:38:36.548021  726897 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1027 22:38:36.548065  726897 kubeadm.go:602] duration metric: took 25.532571ms to restartPrimaryControlPlane
	I1027 22:38:36.548089  726897 kubeadm.go:403] duration metric: took 97.734505ms to StartCluster
	I1027 22:38:36.548113  726897 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:36.548208  726897 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:36.549445  726897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:36.549735  726897 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:38:36.549834  726897 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:38:36.549940  726897 addons.go:69] Setting storage-provisioner=true in profile "no-preload-188814"
	I1027 22:38:36.549953  726897 addons.go:69] Setting dashboard=true in profile "no-preload-188814"
	I1027 22:38:36.549970  726897 addons.go:238] Setting addon dashboard=true in "no-preload-188814"
	I1027 22:38:36.549970  726897 addons.go:238] Setting addon storage-provisioner=true in "no-preload-188814"
	W1027 22:38:36.549979  726897 addons.go:247] addon storage-provisioner should already be in state true
	W1027 22:38:36.549979  726897 addons.go:247] addon dashboard should already be in state true
	I1027 22:38:36.550003  726897 addons.go:69] Setting default-storageclass=true in profile "no-preload-188814"
	I1027 22:38:36.550041  726897 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-188814"
	I1027 22:38:36.550019  726897 host.go:66] Checking if "no-preload-188814" exists ...
	I1027 22:38:36.550019  726897 host.go:66] Checking if "no-preload-188814" exists ...
	I1027 22:38:36.550017  726897 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:36.550424  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:36.550603  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:36.550718  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:36.553472  726897 out.go:179] * Verifying Kubernetes components...
	I1027 22:38:36.554690  726897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:36.585520  726897 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 22:38:36.590116  726897 addons.go:238] Setting addon default-storageclass=true in "no-preload-188814"
	W1027 22:38:36.590140  726897 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:38:36.590175  726897 host.go:66] Checking if "no-preload-188814" exists ...
	I1027 22:38:36.590656  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:36.592985  726897 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 22:38:36.594144  726897 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:38:36.594147  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 22:38:36.594242  726897 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 22:38:36.594307  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:36.595350  726897 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:38:36.595371  726897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:38:36.595426  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:36.621187  726897 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:38:36.621221  726897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:38:36.621295  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:36.635220  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:36.647998  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:36.666143  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:36.779609  726897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:38:36.781790  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 22:38:36.781816  726897 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 22:38:36.803576  726897 node_ready.go:35] waiting up to 6m0s for node "no-preload-188814" to be "Ready" ...
	I1027 22:38:36.819510  726897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:38:36.819660  726897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:38:36.825241  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 22:38:36.825269  726897 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 22:38:36.887037  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 22:38:36.887070  726897 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 22:38:36.926925  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 22:38:36.926968  726897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 22:38:36.945244  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 22:38:36.945273  726897 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 22:38:36.964026  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 22:38:36.964054  726897 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 22:38:36.985295  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 22:38:36.985329  726897 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 22:38:37.002371  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 22:38:37.002488  726897 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 22:38:37.023396  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:38:37.023428  726897 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 22:38:37.039591  726897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:38:38.703060  726897 node_ready.go:49] node "no-preload-188814" is "Ready"
	I1027 22:38:38.703107  726897 node_ready.go:38] duration metric: took 1.899482355s for node "no-preload-188814" to be "Ready" ...
	I1027 22:38:38.703141  726897 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:38:38.703209  726897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
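	The Ready poll that started at 22:38:36.803 resolved in about 1.9s; the same wait can be expressed with kubectl's built-in condition watcher (a sketch, using this run's kubeconfig path):
	
	  kubectl --kubeconfig /home/jenkins/minikube-integration/21790-482142/kubeconfig \
	    wait --for=condition=Ready node/no-preload-188814 --timeout=6m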
	I1027 22:38:34.280220  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:34.284564  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:34.284644  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:34.315498  682462 cri.go:89] found id: ""
	I1027 22:38:34.315528  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.315537  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:34.315545  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:34.315615  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:34.346061  682462 cri.go:89] found id: ""
	I1027 22:38:34.346090  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.346100  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:34.346130  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:34.346147  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:34.379969  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:34.380007  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:34.402037  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:38:34.402076  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:34.436434  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:34.436474  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:34.533827  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:34.533865  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:34.605393  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:34.605441  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:34.605461  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:34.646201  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:34.646241  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:34.716659  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:34.716703  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:34.748528  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:34.748560  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:37.308090  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:37.308709  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:37.308785  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:37.308851  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:37.352401  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:37.352430  682462 cri.go:89] found id: ""
	I1027 22:38:37.352441  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:37.352508  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:37.358406  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:37.358480  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:37.401768  682462 cri.go:89] found id: ""
	I1027 22:38:37.401794  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.401804  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:37.401812  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:37.401867  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:37.442805  682462 cri.go:89] found id: ""
	I1027 22:38:37.442837  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.442849  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:37.442858  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:37.442925  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:37.485292  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:37.485427  682462 cri.go:89] found id: ""
	I1027 22:38:37.485452  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:37.485519  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:37.491539  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:37.491610  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:37.532559  682462 cri.go:89] found id: ""
	I1027 22:38:37.532594  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.532605  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:37.532614  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:37.532676  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:37.577708  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:37.577729  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:37.577732  682462 cri.go:89] found id: ""
	I1027 22:38:37.577740  682462 logs.go:282] 2 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e]
	I1027 22:38:37.577789  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:37.584125  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:37.589205  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:37.589276  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:37.640825  682462 cri.go:89] found id: ""
	I1027 22:38:37.640855  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.640884  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:37.640893  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:37.640981  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:37.686439  682462 cri.go:89] found id: ""
	I1027 22:38:37.686549  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.686560  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:37.686579  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:38:37.686604  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:37.735321  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:37.735362  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:37.825453  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:37.825496  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:37.881111  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:37.881152  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:37.928016  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:37.928059  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:37.984481  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:37.984516  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:38.143604  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:38.143650  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:38.182557  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:38.182600  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:38.277080  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:38.277118  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:38.277187  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
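
Each "Gathering logs for ..." step above is an ordinary shell command run over SSH on the node, so the whole cycle can be reproduced manually. The commands below are taken from this log (lightly simplified; only the container ID is a placeholder):

    sudo journalctl -u kubelet -n 400          # kubelet
    sudo journalctl -u crio -n 400             # CRI-O
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a                          # container status
    sudo crictl logs --tail 400 CONTAINER_ID   # any container listed above
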
	I1027 22:38:39.501932  726897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.682237745s)
	I1027 22:38:39.502025  726897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.682475741s)
	I1027 22:38:39.502155  726897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.462524358s)
	I1027 22:38:39.502186  726897 api_server.go:72] duration metric: took 2.952412975s to wait for apiserver process to appear ...
	I1027 22:38:39.502200  726897 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:38:39.502230  726897 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 22:38:39.503781  726897 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-188814 addons enable metrics-server
	
	I1027 22:38:39.507212  726897 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:38:39.507242  726897 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:38:39.510726  726897 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
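
The 500 body above is the apiserver's own readiness breakdown: every post-start hook reports individually, and the only failing checks here are the RBAC bootstrap roles and the system priority classes, both of which normally clear within seconds of startup (the same endpoint returns 200 "ok" about half a second later in this log). A hedged sketch of the poll the harness performs, with the endpoint taken from the log and the loop itself illustrative:

    until curl -sk --max-time 2 https://192.168.94.2:8443/healthz | grep -qx ok; do
      sleep 1
    done
    # -k skips TLS verification; the harness instead trusts the cluster CA
    # from the kubeconfig, so this shortcut is for manual poking only.
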
	I1027 22:38:42.322809  724915 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:38:42.322861  724915 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:38:42.322964  724915 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:38:42.323036  724915 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:38:42.323068  724915 kubeadm.go:319] OS: Linux
	I1027 22:38:42.323177  724915 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:38:42.323277  724915 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:38:42.323346  724915 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:38:42.323435  724915 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:38:42.323518  724915 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:38:42.323563  724915 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:38:42.323611  724915 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:38:42.323650  724915 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:38:42.323725  724915 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:38:42.323812  724915 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:38:42.323908  724915 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:38:42.324008  724915 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:38:42.325210  724915 out.go:252]   - Generating certificates and keys ...
	I1027 22:38:42.325301  724915 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:38:42.325363  724915 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:38:42.325449  724915 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:38:42.325543  724915 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:38:42.325646  724915 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:38:42.325741  724915 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:38:42.325854  724915 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:38:42.326006  724915 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-829976 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 22:38:42.326087  724915 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:38:42.326251  724915 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-829976 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 22:38:42.326353  724915 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:38:42.326444  724915 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:38:42.326529  724915 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:38:42.326637  724915 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:38:42.326716  724915 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:38:42.326798  724915 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:38:42.326887  724915 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:38:42.327009  724915 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:38:42.327083  724915 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:38:42.327210  724915 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:38:42.327325  724915 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:38:42.328722  724915 out.go:252]   - Booting up control plane ...
	I1027 22:38:42.328803  724915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:38:42.328870  724915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:38:42.328924  724915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:38:42.329042  724915 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:38:42.329154  724915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:38:42.329279  724915 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:38:42.329363  724915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:38:42.329412  724915 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:38:42.329535  724915 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:38:42.329631  724915 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:38:42.329680  724915 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.026085ms
	I1027 22:38:42.329779  724915 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:38:42.329870  724915 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 22:38:42.329970  724915 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:38:42.330048  724915 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:38:42.330120  724915 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.705234911s
	I1027 22:38:42.330179  724915 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.91005687s
	I1027 22:38:42.330241  724915 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502523761s
	I1027 22:38:42.330334  724915 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:38:42.330459  724915 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:38:42.330529  724915 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:38:42.330762  724915 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-829976 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:38:42.330846  724915 kubeadm.go:319] [bootstrap-token] Using token: ra0n2j.d96j3y85d2xm2zyd
	I1027 22:38:42.332220  724915 out.go:252]   - Configuring RBAC rules ...
	I1027 22:38:42.332342  724915 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:38:42.332447  724915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:38:42.332652  724915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:38:42.332838  724915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:38:42.333022  724915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:38:42.333154  724915 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:38:42.333293  724915 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:38:42.333354  724915 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:38:42.333408  724915 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:38:42.333418  724915 kubeadm.go:319] 
	I1027 22:38:42.333510  724915 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:38:42.333519  724915 kubeadm.go:319] 
	I1027 22:38:42.333606  724915 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:38:42.333616  724915 kubeadm.go:319] 
	I1027 22:38:42.333665  724915 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:38:42.333750  724915 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:38:42.333826  724915 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:38:42.333835  724915 kubeadm.go:319] 
	I1027 22:38:42.333899  724915 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:38:42.333906  724915 kubeadm.go:319] 
	I1027 22:38:42.333977  724915 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:38:42.333987  724915 kubeadm.go:319] 
	I1027 22:38:42.334035  724915 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:38:42.334104  724915 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:38:42.334167  724915 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:38:42.334173  724915 kubeadm.go:319] 
	I1027 22:38:42.334262  724915 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:38:42.334349  724915 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:38:42.334356  724915 kubeadm.go:319] 
	I1027 22:38:42.334494  724915 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ra0n2j.d96j3y85d2xm2zyd \
	I1027 22:38:42.334645  724915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 22:38:42.334678  724915 kubeadm.go:319] 	--control-plane 
	I1027 22:38:42.334687  724915 kubeadm.go:319] 
	I1027 22:38:42.334793  724915 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:38:42.334801  724915 kubeadm.go:319] 
	I1027 22:38:42.334885  724915 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ra0n2j.d96j3y85d2xm2zyd \
	I1027 22:38:42.335084  724915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
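
The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control-plane node with the standard openssl recipe from the kubeadm docs; the certificate path comes from the "[certs] Using certificateDir folder" line earlier in this output:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # assumes an RSA CA key (kubeadm's default); for other key types use
    # 'openssl pkey -pubin -outform der' in place of the rsa step
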
	I1027 22:38:42.335101  724915 cni.go:84] Creating CNI manager for ""
	I1027 22:38:42.335113  724915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:42.336553  724915 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:38:42.337614  724915 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:38:42.342664  724915 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:38:42.342687  724915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:38:42.357415  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:38:42.590246  724915 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:38:42.590350  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:42.590370  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-829976 minikube.k8s.io/updated_at=2025_10_27T22_38_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=embed-certs-829976 minikube.k8s.io/primary=true
	I1027 22:38:42.601266  724915 ops.go:34] apiserver oom_adj: -16
	I1027 22:38:42.682810  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:43.183354  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:43.683665  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
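
The repeated "get sa default" calls above are minikube waiting for the default ServiceAccount, which the controller-manager only creates once RBAC bootstrap has finished; that same RBAC setup is what the earlier healthz 500 was waiting on. A hedged shell equivalent of the wait, using the same kubectl invocation as the log:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
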
	I1027 22:38:39.512033  726897 addons.go:514] duration metric: took 2.962204357s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 22:38:40.003099  726897 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 22:38:40.007383  726897 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1027 22:38:40.008287  726897 api_server.go:141] control plane version: v1.34.1
	I1027 22:38:40.008312  726897 api_server.go:131] duration metric: took 506.105489ms to wait for apiserver health ...
	I1027 22:38:40.008322  726897 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:38:40.011730  726897 system_pods.go:59] 8 kube-system pods found
	I1027 22:38:40.011760  726897 system_pods.go:61] "coredns-66bc5c9577-m8lfc" [486551a5-b1eb-4fb1-8f1e-ba4a945a2791] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:38:40.011767  726897 system_pods.go:61] "etcd-no-preload-188814" [793ec55b-c1aa-483b-b315-3e75a21d71d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:38:40.011777  726897 system_pods.go:61] "kindnet-thlc6" [9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9] Running
	I1027 22:38:40.011783  726897 system_pods.go:61] "kube-apiserver-no-preload-188814" [572f9081-8ed9-4e69-8d77-0475bcae35b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:38:40.011791  726897 system_pods.go:61] "kube-controller-manager-no-preload-188814" [f2669c26-b7c4-4d32-8dc0-6ef7e15dea21] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:38:40.011796  726897 system_pods.go:61] "kube-proxy-4nwvc" [a82e59ec-7ef7-46aa-a9d3-64a1f8af2222] Running
	I1027 22:38:40.011803  726897 system_pods.go:61] "kube-scheduler-no-preload-188814" [012078bb-8e72-4b64-b7a4-48f33c1a1092] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:38:40.011809  726897 system_pods.go:61] "storage-provisioner" [9bd12118-14fd-4ef6-a0f1-dd7130601f49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:38:40.011817  726897 system_pods.go:74] duration metric: took 3.489312ms to wait for pod list to return data ...
	I1027 22:38:40.011827  726897 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:38:40.013909  726897 default_sa.go:45] found service account: "default"
	I1027 22:38:40.013928  726897 default_sa.go:55] duration metric: took 2.09243ms for default service account to be created ...
	I1027 22:38:40.013938  726897 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:38:40.016626  726897 system_pods.go:86] 8 kube-system pods found
	I1027 22:38:40.016657  726897 system_pods.go:89] "coredns-66bc5c9577-m8lfc" [486551a5-b1eb-4fb1-8f1e-ba4a945a2791] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:38:40.016673  726897 system_pods.go:89] "etcd-no-preload-188814" [793ec55b-c1aa-483b-b315-3e75a21d71d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:38:40.016682  726897 system_pods.go:89] "kindnet-thlc6" [9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9] Running
	I1027 22:38:40.016692  726897 system_pods.go:89] "kube-apiserver-no-preload-188814" [572f9081-8ed9-4e69-8d77-0475bcae35b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:38:40.016705  726897 system_pods.go:89] "kube-controller-manager-no-preload-188814" [f2669c26-b7c4-4d32-8dc0-6ef7e15dea21] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:38:40.016711  726897 system_pods.go:89] "kube-proxy-4nwvc" [a82e59ec-7ef7-46aa-a9d3-64a1f8af2222] Running
	I1027 22:38:40.016720  726897 system_pods.go:89] "kube-scheduler-no-preload-188814" [012078bb-8e72-4b64-b7a4-48f33c1a1092] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:38:40.016728  726897 system_pods.go:89] "storage-provisioner" [9bd12118-14fd-4ef6-a0f1-dd7130601f49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:38:40.016740  726897 system_pods.go:126] duration metric: took 2.768995ms to wait for k8s-apps to be running ...
	I1027 22:38:40.016752  726897 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:38:40.016806  726897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:38:40.030591  726897 system_svc.go:56] duration metric: took 13.825821ms WaitForService to wait for kubelet
	I1027 22:38:40.030622  726897 kubeadm.go:587] duration metric: took 3.48085182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:38:40.030642  726897 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:38:40.033599  726897 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:38:40.033633  726897 node_conditions.go:123] node cpu capacity is 8
	I1027 22:38:40.033647  726897 node_conditions.go:105] duration metric: took 3.000721ms to run NodePressure ...
	I1027 22:38:40.033659  726897 start.go:242] waiting for startup goroutines ...
	I1027 22:38:40.033666  726897 start.go:247] waiting for cluster config update ...
	I1027 22:38:40.033676  726897 start.go:256] writing updated cluster config ...
	I1027 22:38:40.033995  726897 ssh_runner.go:195] Run: rm -f paused
	I1027 22:38:40.038455  726897 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:38:40.041959  726897 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m8lfc" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 22:38:42.047766  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	W1027 22:38:44.048612  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
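
The "extra waiting" above polls each control-plane pod's Ready condition by label for up to 4m0s; note the harness also accepts the pod being gone, which plain kubectl cannot express. A rough, hedged kubectl approximation for the coredns case (label and timeout taken from the log):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=4m
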
	I1027 22:38:40.866028  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:40.866510  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:40.866568  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:40.866630  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:40.901261  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:40.901288  682462 cri.go:89] found id: ""
	I1027 22:38:40.901300  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:40.901364  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:40.906638  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:40.906721  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:40.947742  682462 cri.go:89] found id: ""
	I1027 22:38:40.947774  682462 logs.go:282] 0 containers: []
	W1027 22:38:40.947785  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:40.947793  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:40.947863  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:40.988406  682462 cri.go:89] found id: ""
	I1027 22:38:40.988437  682462 logs.go:282] 0 containers: []
	W1027 22:38:40.988449  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:40.988457  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:40.988524  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:41.021368  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:41.021393  682462 cri.go:89] found id: ""
	I1027 22:38:41.021403  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:41.021461  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:41.026168  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:41.026259  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:41.057538  682462 cri.go:89] found id: ""
	I1027 22:38:41.057569  682462 logs.go:282] 0 containers: []
	W1027 22:38:41.057583  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:41.057592  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:41.057652  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:41.088001  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:41.088025  682462 cri.go:89] found id: ""
	I1027 22:38:41.088034  682462 logs.go:282] 1 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387]
	I1027 22:38:41.088086  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:41.092957  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:41.093049  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:41.124700  682462 cri.go:89] found id: ""
	I1027 22:38:41.124733  682462 logs.go:282] 0 containers: []
	W1027 22:38:41.124746  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:41.124755  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:41.124815  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:41.156323  682462 cri.go:89] found id: ""
	I1027 22:38:41.156356  682462 logs.go:282] 0 containers: []
	W1027 22:38:41.156368  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:41.156382  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:41.156402  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:41.213504  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:41.213548  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:41.244636  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:41.244671  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:41.312449  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:41.312494  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:41.355757  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:41.355788  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:41.462972  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:41.463015  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:41.484161  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:41.484207  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:41.557889  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:41.557919  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:41.557937  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:44.111024  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:44.111534  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:44.111594  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:44.111656  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:44.148703  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:44.148734  682462 cri.go:89] found id: ""
	I1027 22:38:44.148745  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:44.148808  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:44.153837  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:44.153904  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:44.188073  682462 cri.go:89] found id: ""
	I1027 22:38:44.188103  682462 logs.go:282] 0 containers: []
	W1027 22:38:44.188114  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:44.188122  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:44.188184  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:44.220474  682462 cri.go:89] found id: ""
	I1027 22:38:44.220505  682462 logs.go:282] 0 containers: []
	W1027 22:38:44.220518  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:44.220526  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:44.220584  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:44.258910  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:44.258935  682462 cri.go:89] found id: ""
	I1027 22:38:44.258959  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:44.259020  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:44.264353  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:44.264429  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	
	
	==> CRI-O <==
	Oct 27 22:38:12 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:12.32847864Z" level=info msg="Starting container: 05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b" id=b9f3b3b1-8577-47d6-aed9-e1c4c0ec6c1c name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:12 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:12.330924696Z" level=info msg="Started container" PID=1681 containerID=05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper id=b9f3b3b1-8577-47d6-aed9-e1c4c0ec6c1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=67146ae18a8a1c99ac76ad9623adf2e88ddbb0590ad2168089e93ddfc353fde6
	Oct 27 22:38:13 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:13.214390958Z" level=info msg="Removing container: 9d5701af5dda9039d363675238269d9c7e24efd0a054f94c4cf7e85901485224" id=b4c34d35-2c7b-4480-8a57-2fbdeb34d217 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:38:13 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:13.224293317Z" level=info msg="Removed container 9d5701af5dda9039d363675238269d9c7e24efd0a054f94c4cf7e85901485224: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper" id=b4c34d35-2c7b-4480-8a57-2fbdeb34d217 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.076533961Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=5511dbdc-c73d-4bf1-a3a5-8d4824425635 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.077474907Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8f46648f-37a4-4f29-ba0b-02ef9d5ebcd5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.079153213Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8/kubernetes-dashboard" id=2498c4a7-07fc-48f0-a3c6-ef53c4c2fec9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.079300788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.083438346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.083606403Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9f478a5fbf3259f63e2ab66b49daddd160d69ebe8f788f9d9be07388d3e85acc/merged/etc/group: no such file or directory"
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.083901606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.11219391Z" level=info msg="Created container 3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8/kubernetes-dashboard" id=2498c4a7-07fc-48f0-a3c6-ef53c4c2fec9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.112740883Z" level=info msg="Starting container: 3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c" id=20af0a11-bcc0-47a1-9e33-969379825c81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.11438879Z" level=info msg="Started container" PID=1732 containerID=3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8/kubernetes-dashboard id=20af0a11-bcc0-47a1-9e33-969379825c81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1573afae1b7cb5e7e910e764107bf01746e149299824c8daf7b3acb03eddef26
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.145163724Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f44c29da-bd29-4e43-80b7-f11c7886f06a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.146159593Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb00c238-b3c0-4503-af96-5ed744489653 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.147294481Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper" id=025c1bc2-7c2a-4d6c-84ff-7ce2972e177d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.147512017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.154611097Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.155219147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.180655507Z" level=info msg="Created container f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper" id=025c1bc2-7c2a-4d6c-84ff-7ce2972e177d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.181389729Z" level=info msg="Starting container: f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc" id=3d06e330-fb79-4807-a400-100420ad8832 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.183415712Z" level=info msg="Started container" PID=1758 containerID=f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper id=3d06e330-fb79-4807-a400-100420ad8832 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67146ae18a8a1c99ac76ad9623adf2e88ddbb0590ad2168089e93ddfc353fde6
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.261897267Z" level=info msg="Removing container: 05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b" id=e4e01079-117e-4031-b5c0-2547ddefb0e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.274506774Z" level=info msg="Removed container 05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper" id=e4e01079-117e-4031-b5c0-2547ddefb0e3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f38488d38d5e2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   67146ae18a8a1       dashboard-metrics-scraper-5f989dc9cf-wbww8       kubernetes-dashboard
	3df1b82967aac       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   1573afae1b7cb       kubernetes-dashboard-8694d4445c-n7dg8            kubernetes-dashboard
	a5bda7727c540       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Running             storage-provisioner         1                   5759157ef8ca9       storage-provisioner                              kube-system
	21430bbf8df99       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   f27deefa19e93       coredns-5dd5756b68-jwp99                         kube-system
	27f9c91f932c4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   eda6be1b010df       busybox                                          default
	ec59b02f91c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   5759157ef8ca9       storage-provisioner                              kube-system
	40b5ad6840c82       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   21b149acc96df       kindnet-v6dh4                                    kube-system
	f4690cc69163d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   e06515105471f       kube-proxy-srms5                                 kube-system
	6cdce94a5f78b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   0467483b7743d       kube-controller-manager-old-k8s-version-908589   kube-system
	e64b44ab53a02       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   366ef0e44ad0f       kube-apiserver-old-k8s-version-908589            kube-system
	0552ed0e96ff6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   cccb7c031d152       etcd-old-k8s-version-908589                      kube-system
	e61d7b54f2b00       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   2b22f7f853a64       kube-scheduler-old-k8s-version-908589            kube-system
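
The status table above shows dashboard-metrics-scraper Exited with ATTEMPT 2 while everything else is Running: a restart/backoff loop, matching the create/remove cycle in the CRI-O journal just above. The per-container log command used throughout this report applies directly, with the ID taken from the first column (crictl accepts ID prefixes):

    sudo crictl logs --tail 400 f38488d38d5e2
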
	
	
	==> coredns [21430bbf8df99df3b9a23d0e6400e2be25bca17ae542da44b69472a011a78162] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43721 - 10261 "HINFO IN 6022255000218777880.585288113833361241. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.03101895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
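
The final CoreDNS warning above (i/o timeout to 10.96.0.1:443) says the pod could not reach the in-cluster apiserver Service during the restart window, consistent with the refused healthz probes elsewhere in this run. Two quick checks, assuming working kubectl access to the same cluster:

    kubectl get svc kubernetes -o wide      # 10.96.0.1 is this ClusterIP
    kubectl get endpoints kubernetes        # should list the apiserver address
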
	
	
	==> describe nodes <==
	Name:               old-k8s-version-908589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-908589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=old-k8s-version-908589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_36_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-908589
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:38:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:38:26 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:38:26 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:38:26 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:38:26 +0000   Mon, 27 Oct 2025 22:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-908589
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                150d46e9-6742-4ab0-adb7-789e26ecfc2c
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-jwp99                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-old-k8s-version-908589                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m9s
	  kube-system                 kindnet-v6dh4                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-908589             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-old-k8s-version-908589    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-srms5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-908589             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-wbww8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-n7dg8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 114s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 2m7s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s               kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s               kubelet          Node old-k8s-version-908589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s               kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           115s               node-controller  Node old-k8s-version-908589 event: Registered Node old-k8s-version-908589 in Controller
	  Normal  NodeReady                101s               kubelet          Node old-k8s-version-908589 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-908589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-908589 event: Registered Node old-k8s-version-908589 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [0552ed0e96ff667dac3ef7da44469e9aecf41285625ff22fbc94d09f10ebe42a] <==
	{"level":"info","ts":"2025-10-27T22:37:53.698796Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-27T22:37:53.699034Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:37:53.699113Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:37:53.701698Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T22:37:53.701977Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T22:37:53.702019Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T22:37:53.702187Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-27T22:37:53.702302Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-27T22:37:55.591699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T22:37:55.591745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T22:37:55.591773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-27T22:37:55.591788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T22:37:55.591794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-27T22:37:55.591802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-27T22:37:55.591809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-27T22:37:55.592775Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-908589 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T22:37:55.592792Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T22:37:55.592814Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T22:37:55.593048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T22:37:55.593072Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T22:37:55.595069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-27T22:37:55.595098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-10-27T22:38:12.392625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.489449ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789622911278499 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:571 > success:<request_put:<key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:648 >> failure:<request_range:<key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T22:38:12.39296Z","caller":"traceutil/trace.go:171","msg":"trace[2074332013] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"156.820858ms","start":"2025-10-27T22:38:12.236092Z","end":"2025-10-27T22:38:12.392913Z","steps":["trace[2074332013] 'process raft request'  (duration: 48.578294ms)","trace[2074332013] 'compare'  (duration: 107.390141ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:38:12.393265Z","caller":"traceutil/trace.go:171","msg":"trace[347201373] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"156.32995ms","start":"2025-10-27T22:38:12.236919Z","end":"2025-10-27T22:38:12.393249Z","steps":["trace[347201373] 'process raft request'  (duration: 155.821799ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:38:48 up  2:21,  0 user,  load average: 3.96, 2.61, 2.71
	Linux old-k8s-version-908589 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40b5ad6840c82eefedf9a6e76bbd8c07fa3d649ed396affb792017c3f80126e6] <==
	I1027 22:37:57.729608       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:37:57.729884       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1027 22:37:57.730090       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:37:57.730113       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:37:57.730140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:37:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:37:57.837657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:37:57.837690       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:37:57.837701       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:37:57.837883       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:37:58.230054       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:37:58.230601       1 metrics.go:72] Registering metrics
	I1027 22:37:58.230680       1 controller.go:711] "Syncing nftables rules"
	I1027 22:38:07.838546       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:07.838590       1 main.go:301] handling current node
	I1027 22:38:17.838715       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:17.838756       1 main.go:301] handling current node
	I1027 22:38:27.837607       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:27.837644       1 main.go:301] handling current node
	I1027 22:38:37.843025       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:37.843068       1 main.go:301] handling current node
	I1027 22:38:47.844395       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:47.844446       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e64b44ab53a02f28c14e5582dc7be12f197b4831f11356e8d5c51aa28e9eff8e] <==
	I1027 22:37:56.512813       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:37:56.515981       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 22:37:56.563338       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1027 22:37:56.563401       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 22:37:56.563435       1 aggregator.go:166] initial CRD sync complete...
	I1027 22:37:56.563445       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 22:37:56.563453       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:37:56.563462       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:37:56.563726       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:37:56.564642       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1027 22:37:56.564670       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1027 22:37:56.571662       1 shared_informer.go:318] Caches are synced for configmaps
	I1027 22:37:56.573805       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1027 22:37:57.364308       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 22:37:57.391005       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 22:37:57.405645       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:37:57.412212       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:37:57.417925       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 22:37:57.455093       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.184.86"}
	I1027 22:37:57.473159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:37:57.474510       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.98.230"}
	I1027 22:38:09.180429       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 22:38:09.532362       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:38:09.532359       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:38:09.631304       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6cdce94a5f78b08c7fa45e7720dfbf6930fe756536de03ceb5d36d0124ee1c23] <==
	I1027 22:38:09.437805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="368.344218ms"
	I1027 22:38:09.437987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.039µs"
	I1027 22:38:09.439463       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-n7dg8"
	I1027 22:38:09.439611       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-wbww8"
	I1027 22:38:09.444683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="260.558473ms"
	I1027 22:38:09.446762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="261.857655ms"
	I1027 22:38:09.451557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.685029ms"
	I1027 22:38:09.451645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.221µs"
	I1027 22:38:09.456183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.397848ms"
	I1027 22:38:09.456279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.895µs"
	I1027 22:38:09.458007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.536µs"
	I1027 22:38:09.465612       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.238µs"
	I1027 22:38:09.535717       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1027 22:38:09.656110       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 22:38:09.728504       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 22:38:09.728547       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 22:38:12.233546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.096µs"
	I1027 22:38:13.224340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.005µs"
	I1027 22:38:14.229109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.477µs"
	I1027 22:38:16.239866       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.608104ms"
	I1027 22:38:16.239982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.252µs"
	I1027 22:38:30.105646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.939922ms"
	I1027 22:38:30.105803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.614µs"
	I1027 22:38:31.271551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.652µs"
	I1027 22:38:39.760724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="113.006µs"
	
	
	==> kube-proxy [f4690cc69163d663fdab691358519ee0401aa190792f240348c12d39a643e5f5] <==
	I1027 22:37:57.537615       1 server_others.go:69] "Using iptables proxy"
	I1027 22:37:57.546576       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1027 22:37:57.566328       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:37:57.569027       1 server_others.go:152] "Using iptables Proxier"
	I1027 22:37:57.569055       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 22:37:57.569061       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 22:37:57.569095       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 22:37:57.569305       1 server.go:846] "Version info" version="v1.28.0"
	I1027 22:37:57.569322       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:37:57.570457       1 config.go:188] "Starting service config controller"
	I1027 22:37:57.570499       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 22:37:57.570534       1 config.go:315] "Starting node config controller"
	I1027 22:37:57.570542       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 22:37:57.570564       1 config.go:97] "Starting endpoint slice config controller"
	I1027 22:37:57.570588       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 22:37:57.671481       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 22:37:57.671609       1 shared_informer.go:318] Caches are synced for service config
	I1027 22:37:57.671606       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e61d7b54f2b00d9f3cc449906592240dfddbc082a601333546e64cbf3aab5c08] <==
	I1027 22:37:53.986671       1 serving.go:348] Generated self-signed cert in-memory
	W1027 22:37:56.499241       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:37:56.499277       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:37:56.499290       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:37:56.499299       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:37:56.525321       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1027 22:37:56.525351       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:37:56.528121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:37:56.528178       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 22:37:56.528534       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1027 22:37:56.528643       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1027 22:37:56.628422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.445904     728 topology_manager.go:215] "Topology Admit Handler" podUID="350e6819-9685-4f35-baab-0b7e8df8513a" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-n7dg8"
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.574685     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/350e6819-9685-4f35-baab-0b7e8df8513a-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-n7dg8\" (UID: \"350e6819-9685-4f35-baab-0b7e8df8513a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8"
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.574936     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx4c\" (UniqueName: \"kubernetes.io/projected/350e6819-9685-4f35-baab-0b7e8df8513a-kube-api-access-dfx4c\") pod \"kubernetes-dashboard-8694d4445c-n7dg8\" (UID: \"350e6819-9685-4f35-baab-0b7e8df8513a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8"
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.575069     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/02a43cb5-35de-48c4-a04f-de7368d3b206-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-wbww8\" (UID: \"02a43cb5-35de-48c4-a04f-de7368d3b206\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8"
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.575102     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xp2s\" (UniqueName: \"kubernetes.io/projected/02a43cb5-35de-48c4-a04f-de7368d3b206-kube-api-access-8xp2s\") pod \"dashboard-metrics-scraper-5f989dc9cf-wbww8\" (UID: \"02a43cb5-35de-48c4-a04f-de7368d3b206\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8"
	Oct 27 22:38:12 old-k8s-version-908589 kubelet[728]: I1027 22:38:12.208167     728 scope.go:117] "RemoveContainer" containerID="9d5701af5dda9039d363675238269d9c7e24efd0a054f94c4cf7e85901485224"
	Oct 27 22:38:13 old-k8s-version-908589 kubelet[728]: I1027 22:38:13.213030     728 scope.go:117] "RemoveContainer" containerID="9d5701af5dda9039d363675238269d9c7e24efd0a054f94c4cf7e85901485224"
	Oct 27 22:38:13 old-k8s-version-908589 kubelet[728]: I1027 22:38:13.213239     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:13 old-k8s-version-908589 kubelet[728]: E1027 22:38:13.213649     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:14 old-k8s-version-908589 kubelet[728]: I1027 22:38:14.217227     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:14 old-k8s-version-908589 kubelet[728]: E1027 22:38:14.217628     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:16 old-k8s-version-908589 kubelet[728]: I1027 22:38:16.234498     728 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8" podStartSLOduration=0.933316773 podCreationTimestamp="2025-10-27 22:38:09 +0000 UTC" firstStartedPulling="2025-10-27 22:38:09.775772151 +0000 UTC m=+16.720768255" lastFinishedPulling="2025-10-27 22:38:16.076892136 +0000 UTC m=+23.021888252" observedRunningTime="2025-10-27 22:38:16.234042405 +0000 UTC m=+23.179038530" watchObservedRunningTime="2025-10-27 22:38:16.23443677 +0000 UTC m=+23.179432894"
	Oct 27 22:38:19 old-k8s-version-908589 kubelet[728]: I1027 22:38:19.747832     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:19 old-k8s-version-908589 kubelet[728]: E1027 22:38:19.748280     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:31 old-k8s-version-908589 kubelet[728]: I1027 22:38:31.144365     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:31 old-k8s-version-908589 kubelet[728]: I1027 22:38:31.260557     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:31 old-k8s-version-908589 kubelet[728]: I1027 22:38:31.260875     728 scope.go:117] "RemoveContainer" containerID="f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc"
	Oct 27 22:38:31 old-k8s-version-908589 kubelet[728]: E1027 22:38:31.261288     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:39 old-k8s-version-908589 kubelet[728]: I1027 22:38:39.748316     728 scope.go:117] "RemoveContainer" containerID="f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc"
	Oct 27 22:38:39 old-k8s-version-908589 kubelet[728]: E1027 22:38:39.748748     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:45 old-k8s-version-908589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:38:45 old-k8s-version-908589 kubelet[728]: I1027 22:38:45.200773     728 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 27 22:38:45 old-k8s-version-908589 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:38:45 old-k8s-version-908589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:38:45 old-k8s-version-908589 systemd[1]: kubelet.service: Consumed 1.478s CPU time.
	
	
	==> kubernetes-dashboard [3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c] <==
	2025/10/27 22:38:16 Using namespace: kubernetes-dashboard
	2025/10/27 22:38:16 Using in-cluster config to connect to apiserver
	2025/10/27 22:38:16 Using secret token for csrf signing
	2025/10/27 22:38:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:38:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:38:16 Successful initial request to the apiserver, version: v1.28.0
	2025/10/27 22:38:16 Generating JWE encryption key
	2025/10/27 22:38:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:38:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:38:16 Initializing JWE encryption key from synchronized object
	2025/10/27 22:38:16 Creating in-cluster Sidecar client
	2025/10/27 22:38:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:38:16 Serving insecurely on HTTP port: 9090
	2025/10/27 22:38:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:38:16 Starting overwatch
	
	
	==> storage-provisioner [a5bda7727c540811b4409b8ecc67d9d385823f5aa5de84580883039c1baf1935] <==
	I1027 22:37:58.221975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 22:37:58.231637       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 22:37:58.231681       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 22:38:15.627689       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 22:38:15.627839       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908589_9e22608c-affc-4f7b-8268-ee3ea6c992f9!
	I1027 22:38:15.627806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0abcea3-4af1-407b-918c-156849108be7", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-908589_9e22608c-affc-4f7b-8268-ee3ea6c992f9 became leader
	I1027 22:38:15.728104       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908589_9e22608c-affc-4f7b-8268-ee3ea6c992f9!
	
	
	==> storage-provisioner [ec59b02f91c0b8777c448403a25b84492d518f669cf7e6d1d62914de1ae6d861] <==
	I1027 22:37:57.517049       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:37:57.518989       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908589 -n old-k8s-version-908589
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908589 -n old-k8s-version-908589: exit status 2 (425.413744ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-908589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-908589
helpers_test.go:243: (dbg) docker inspect old-k8s-version-908589:

-- stdout --
	[
	    {
	        "Id": "2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b",
	        "Created": "2025-10-27T22:36:26.560709331Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:37:47.062330833Z",
	            "FinishedAt": "2025-10-27T22:37:45.948442115Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/hosts",
	        "LogPath": "/var/lib/docker/containers/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b/2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b-json.log",
	        "Name": "/old-k8s-version-908589",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-908589:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-908589",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d571bec60f7417d280af039aa2e4faf726c967779fa6c68ec9eca2bcb61547b",
	                "LowerDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f0254d0f78d45ae5272167dc28461f7cf1fb17de391a1e1a5f9214d32874526/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-908589",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-908589/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-908589",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-908589",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-908589",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b247857440003b4f72b816ca9c6393459d0a2cb7e49a4cc53fe57e2a90f88f0f",
	            "SandboxKey": "/var/run/docker/netns/b24785744000",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-908589": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:0a:72:ff:b2:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "798a573c50beee8a1800510c05b8fefb38677fa31ecba8e611494c61259bbf2b",
	                    "EndpointID": "fa72012937bdd81b6e188415f8b09ba348d8cabc0b02e0c5a1b483db80dd873a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-908589",
	                        "2d571bec60f7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908589 -n old-k8s-version-908589
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908589 -n old-k8s-version-908589: exit status 2 (406.17536ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-908589 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-908589 logs -n 25: (1.395957717s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-293335 sudo crio config                                                                                                                                                                                                             │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ delete  │ -p cilium-293335                                                                                                                                                                                                                              │ cilium-293335          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ stop    │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903    │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ stop    │ -p old-k8s-version-908589 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-908589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ start   │ -p cert-expiration-219241 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-219241 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ stop    │ -p no-preload-188814 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p cert-expiration-219241                                                                                                                                                                                                                     │ cert-expiration-219241 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976     │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-188814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814      │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ image   │ old-k8s-version-908589 image list --format=json                                                                                                                                                                                               │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ pause   │ -p old-k8s-version-908589 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-908589 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:38:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:38:29.130543  726897 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:38:29.130850  726897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:29.130862  726897 out.go:374] Setting ErrFile to fd 2...
	I1027 22:38:29.130868  726897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:29.131127  726897 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:38:29.131644  726897 out.go:368] Setting JSON to false
	I1027 22:38:29.132745  726897 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8448,"bootTime":1761596261,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:38:29.132843  726897 start.go:143] virtualization: kvm guest
	I1027 22:38:29.134751  726897 out.go:179] * [no-preload-188814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:38:29.135954  726897 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:38:29.135994  726897 notify.go:221] Checking for updates...
	I1027 22:38:29.137997  726897 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:38:29.139392  726897 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:29.141002  726897 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:38:29.142124  726897 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:38:29.143198  726897 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:38:29.144599  726897 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:29.145315  726897 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:38:29.168555  726897 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:38:29.168639  726897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:38:29.227225  726897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:38:29.216838075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:38:29.227381  726897 docker.go:318] overlay module found
	I1027 22:38:29.229119  726897 out.go:179] * Using the docker driver based on existing profile
	I1027 22:38:25.368790  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:27.727795  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:41328->192.168.76.2:8443: read: connection reset by peer
	I1027 22:38:27.727869  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:27.727924  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:27.757322  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:27.757346  682462 cri.go:89] found id: "c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe"
	I1027 22:38:27.757352  682462 cri.go:89] found id: ""
	I1027 22:38:27.757362  682462 logs.go:282] 2 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810 c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe]
	I1027 22:38:27.757408  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.761254  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.765308  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:27.765363  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:27.791839  682462 cri.go:89] found id: ""
	I1027 22:38:27.791864  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.791872  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:27.791878  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:27.791929  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:27.819717  682462 cri.go:89] found id: ""
	I1027 22:38:27.819742  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.819750  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:27.819756  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:27.819803  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:27.846197  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:27.846228  682462 cri.go:89] found id: ""
	I1027 22:38:27.846238  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:27.846290  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.850217  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:27.850280  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:27.877936  682462 cri.go:89] found id: ""
	I1027 22:38:27.877986  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.877995  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:27.878001  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:27.878066  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:27.904709  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:27.904729  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:27.904734  682462 cri.go:89] found id: ""
	I1027 22:38:27.904742  682462 logs.go:282] 2 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e]
	I1027 22:38:27.904794  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.908890  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:27.913882  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:27.913996  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:27.941560  682462 cri.go:89] found id: ""
	I1027 22:38:27.941582  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.941589  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:27.941595  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:27.941650  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:27.968903  682462 cri.go:89] found id: ""
	I1027 22:38:27.968930  682462 logs.go:282] 0 containers: []
	W1027 22:38:27.968952  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:27.968978  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:27.968998  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:27.999830  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:27.999862  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:28.018932  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:28.018977  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:28.055565  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:38:28.055595  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:28.083081  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:28.083114  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:28.138465  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:28.138499  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:28.229142  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:28.229173  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:28.290207  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:28.290239  682462 logs.go:123] Gathering logs for kube-apiserver [c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe] ...
	I1027 22:38:28.290254  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c4677d9616da25d6029c9e0e1ea1e60fa74107fe3b6a9b66945c7cf6be9901fe"
	I1027 22:38:28.323492  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:28.323519  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:28.374756  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:28.374779  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
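	The block above is minikube's standard crash-diagnostics pass: enumerate each control-plane component's containers via crictl, then tail the last 400 lines of every container found, plus dmesg, the crio and kubelet journals, and a kubectl describe. A minimal sketch of replaying the same pass by hand; the container ID is the one from this log, and <profile> is a placeholder for whichever profile is being debugged:
	  # list a component's containers, then tail one of them
	  minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	  minikube -p <profile> ssh -- sudo crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810
	  # the journal sources gathered above
	  minikube -p <profile> ssh -- sudo journalctl -u crio -n 400
	  minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400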
	I1027 22:38:29.230085  726897 start.go:307] selected driver: docker
	I1027 22:38:29.230098  726897 start.go:928] validating driver "docker" against &{Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:29.230214  726897 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:38:29.231011  726897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:38:29.291602  726897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:38:29.281919651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:38:29.291843  726897 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:38:29.291876  726897 cni.go:84] Creating CNI manager for ""
	I1027 22:38:29.291930  726897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:29.292027  726897 start.go:351] cluster config:
	{Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:29.293350  726897 out.go:179] * Starting "no-preload-188814" primary control-plane node in "no-preload-188814" cluster
	I1027 22:38:29.294199  726897 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:38:29.295405  726897 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:38:29.296574  726897 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:38:29.296666  726897 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:38:29.296713  726897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/config.json ...
	I1027 22:38:29.296851  726897 cache.go:107] acquiring lock: {Name:mk07939a87c1b452f98e2733b4044aaef5b7beb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.296903  726897 cache.go:107] acquiring lock: {Name:mk200c8a2caaaad3c8ed76649a48f615a1ae5be9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297003  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 22:38:29.296993  726897 cache.go:107] acquiring lock: {Name:mk7baa67397d0c68b56096a5558e51581596a4e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297015  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1027 22:38:29.296856  726897 cache.go:107] acquiring lock: {Name:mke466d23cdbe7dd8079b566141851102bac577e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297016  726897 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 181.457µs
	I1027 22:38:29.296996  726897 cache.go:107] acquiring lock: {Name:mk8b6b09ba52dfb608da0a36c4ec3530523b8436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297024  726897 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 145.972µs
	I1027 22:38:29.297043  726897 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 22:38:29.297044  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 22:38:29.297035  726897 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 22:38:29.297053  726897 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 65.933µs
	I1027 22:38:29.297061  726897 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 22:38:29.297052  726897 cache.go:107] acquiring lock: {Name:mkb0147fb3d8ecd8b50c6fd01f6ae7394f0cd687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297064  726897 cache.go:107] acquiring lock: {Name:mk413fcda2edd2da77552c9bdc2211a33f344da6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.296997  726897 cache.go:107] acquiring lock: {Name:mke2de66fafbe14869d74cc23f68775c4135be46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.297086  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 22:38:29.297103  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 22:38:29.297103  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 22:38:29.297107  726897 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 260.614µs
	I1027 22:38:29.297114  726897 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 64.53µs
	I1027 22:38:29.297119  726897 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 55.973µs
	I1027 22:38:29.297126  726897 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 22:38:29.297129  726897 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 22:38:29.297119  726897 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 22:38:29.297167  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 22:38:29.297182  726897 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 257.332µs
	I1027 22:38:29.297195  726897 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 22:38:29.297260  726897 cache.go:115] /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 22:38:29.297285  726897 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 347.911µs
	I1027 22:38:29.297301  726897 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 22:38:29.297313  726897 cache.go:87] Successfully saved all images to host disk.
	I1027 22:38:29.318241  726897 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:38:29.318258  726897 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:38:29.318274  726897 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:38:29.318295  726897 start.go:360] acquireMachinesLock for no-preload-188814: {Name:mkd09e7bc16b18c969a0e9138576a74468fd84c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:29.318343  726897 start.go:364] duration metric: took 33.301µs to acquireMachinesLock for "no-preload-188814"
	I1027 22:38:29.318359  726897 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:38:29.318364  726897 fix.go:55] fixHost starting: 
	I1027 22:38:29.318560  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:29.336530  726897 fix.go:113] recreateIfNeeded on no-preload-188814: state=Stopped err=<nil>
	W1027 22:38:29.336563  726897 fix.go:139] unexpected machine state, will restart: <nil>
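	recreateIfNeeded keys off the kic container's state: here the container exists but is Stopped, so the fix path restarts it rather than provisioning a new machine. A sketch of the equivalent manual check, using the container name from the log (note that docker itself reports a stopped container's status as "exited"; minikube maps that to Stopped):
	  docker container inspect no-preload-188814 --format '{{.State.Status}}'
	  docker start no-preload-188814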
	I1027 22:38:29.041631  724915 ssh_runner.go:195] Run: cat /version.json
	I1027 22:38:29.041685  724915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:38:29.041697  724915 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:38:29.041777  724915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:38:29.060306  724915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:38:29.061144  724915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:38:29.223146  724915 ssh_runner.go:195] Run: systemctl --version
	I1027 22:38:29.230438  724915 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:38:29.271974  724915 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:38:29.277406  724915 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:38:29.277491  724915 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:38:29.304532  724915 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
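	Conflicting bridge/podman CNI configs are not deleted, only renamed with a .mk_disabled suffix so the kindnet config wins; the find one-liner above does exactly that. A quoted, copy-pasteable form of the same command (a sketch; the log's version relies on glob expansion being deferred to find over ssh):
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;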
	I1027 22:38:29.304554  724915 start.go:496] detecting cgroup driver to use...
	I1027 22:38:29.304585  724915 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:38:29.304635  724915 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:38:29.322688  724915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:38:29.335744  724915 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:38:29.335786  724915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:38:29.352699  724915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:38:29.374182  724915 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:38:29.473914  724915 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:38:29.572856  724915 docker.go:234] disabling docker service ...
	I1027 22:38:29.572930  724915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:38:29.593073  724915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:38:29.606851  724915 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:38:29.696043  724915 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:38:29.785238  724915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:38:29.797842  724915 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:38:29.814936  724915 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:38:29.815044  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.826385  724915 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:38:29.826451  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.836549  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.845608  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.854195  724915 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:38:29.862106  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.870835  724915 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.887847  724915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:29.897744  724915 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:38:29.906837  724915 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:38:29.914659  724915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:30.001441  724915 ssh_runner.go:195] Run: sudo systemctl restart crio
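	The sed pipeline above rewrites cri-o's drop-in config in place: the pause image, the systemd cgroup manager, a conmon_cgroup of "pod", and an unprivileged-port sysctl, followed by a daemon-reload and crio restart. A sketch of verifying the result; the path is from the log, and the expected values are inferred from the sed expressions rather than printed by minikube:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # expected, per the edits above:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "systemd"
	  #   conmon_cgroup = "pod"
	  #     "net.ipv4.ip_unprivileged_port_start=0",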
	I1027 22:38:30.109745  724915 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:38:30.109821  724915 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:38:30.115286  724915 start.go:564] Will wait 60s for crictl version
	I1027 22:38:30.115350  724915 ssh_runner.go:195] Run: which crictl
	I1027 22:38:30.119126  724915 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:38:30.145039  724915 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:38:30.145116  724915 ssh_runner.go:195] Run: crio --version
	I1027 22:38:30.173331  724915 ssh_runner.go:195] Run: crio --version
	I1027 22:38:30.203902  724915 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
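	Runtime detection is just two version probes against the socket written to /etc/crictl.yaml earlier; both can be replayed directly on the node:
	  sudo crictl version    # RuntimeName: cri-o, RuntimeVersion: 1.34.1 in this run
	  crio --version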
	W1027 22:38:27.031285  718696 pod_ready.go:104] pod "coredns-5dd5756b68-jwp99" is not "Ready", error: <nil>
	W1027 22:38:29.532236  718696 pod_ready.go:104] pod "coredns-5dd5756b68-jwp99" is not "Ready", error: <nil>
	I1027 22:38:30.531141  718696 pod_ready.go:94] pod "coredns-5dd5756b68-jwp99" is "Ready"
	I1027 22:38:30.531168  718696 pod_ready.go:86] duration metric: took 32.506010253s for pod "coredns-5dd5756b68-jwp99" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.534346  718696 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.538981  718696 pod_ready.go:94] pod "etcd-old-k8s-version-908589" is "Ready"
	I1027 22:38:30.539007  718696 pod_ready.go:86] duration metric: took 4.639408ms for pod "etcd-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.542102  718696 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.546641  718696 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-908589" is "Ready"
	I1027 22:38:30.546667  718696 pod_ready.go:86] duration metric: took 4.542766ms for pod "kube-apiserver-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.549707  718696 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.728780  718696 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-908589" is "Ready"
	I1027 22:38:30.728810  718696 pod_ready.go:86] duration metric: took 179.081738ms for pod "kube-controller-manager-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:30.930032  718696 pod_ready.go:83] waiting for pod "kube-proxy-srms5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:31.328332  718696 pod_ready.go:94] pod "kube-proxy-srms5" is "Ready"
	I1027 22:38:31.328363  718696 pod_ready.go:86] duration metric: took 398.305351ms for pod "kube-proxy-srms5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:31.529129  718696 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:31.928617  718696 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-908589" is "Ready"
	I1027 22:38:31.928639  718696 pod_ready.go:86] duration metric: took 399.480579ms for pod "kube-scheduler-old-k8s-version-908589" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:38:31.928650  718696 pod_ready.go:40] duration metric: took 33.907493908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:38:31.975577  718696 start.go:626] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1027 22:38:31.976850  718696 out.go:203] 
	W1027 22:38:31.977822  718696 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 22:38:31.978931  718696 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 22:38:31.980064  718696 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-908589" cluster and "default" namespace by default
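	The warning reflects kubectl's supported skew of one minor version in either direction of the apiserver; 1.34 against 1.28 is six minors out. The suggested wrapper fetches and runs a kubectl matching the cluster version, e.g. for this profile:
	  out/minikube-linux-amd64 -p old-k8s-version-908589 kubectl -- get pods -A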
	I1027 22:38:30.204927  724915 cli_runner.go:164] Run: docker network inspect embed-certs-829976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:38:30.221604  724915 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 22:38:30.225891  724915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
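	The hosts update above is an idempotent upsert: filter out any stale host.minikube.internal line, append the current gateway mapping, and copy the temp file back over /etc/hosts. Expanded for readability (the IP is from the log):
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    echo $'192.168.85.1\thost.minikube.internal'
	  } > /tmp/hosts.new && sudo cp /tmp/hosts.new /etc/hosts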
	I1027 22:38:30.236363  724915 kubeadm.go:884] updating cluster {Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:38:30.236509  724915 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:38:30.236571  724915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:38:30.270050  724915 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:38:30.270072  724915 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:38:30.270116  724915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:38:30.297812  724915 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:38:30.297838  724915 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:38:30.297848  724915 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 22:38:30.297976  724915 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-829976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
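	The unit text above becomes the 10-kubeadm.conf drop-in that is scp'd to the node a few lines below; once systemd has been reloaded, it can be sanity-checked on the node itself (a sketch):
	  systemctl cat kubelet        # shows the [Service] override with the ExecStart above
	  systemctl is-active kubelet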
	I1027 22:38:30.298057  724915 ssh_runner.go:195] Run: crio config
	I1027 22:38:30.344490  724915 cni.go:84] Creating CNI manager for ""
	I1027 22:38:30.344512  724915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:30.344532  724915 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:38:30.344559  724915 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-829976 NodeName:embed-certs-829976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:38:30.344710  724915 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-829976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
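	The rendered config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. minikube does not validate it as a separate step, but kubeadm can exercise such a file without standing up a cluster; a sketch:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run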
	
	I1027 22:38:30.344783  724915 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:38:30.353227  724915 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:38:30.353300  724915 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:38:30.361260  724915 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 22:38:30.374089  724915 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:38:30.389216  724915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 22:38:30.401888  724915 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:38:30.405649  724915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:38:30.415760  724915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:30.495598  724915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:38:30.520529  724915 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976 for IP: 192.168.85.2
	I1027 22:38:30.520554  724915 certs.go:195] generating shared ca certs ...
	I1027 22:38:30.520571  724915 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:30.520726  724915 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:38:30.520771  724915 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:38:30.520782  724915 certs.go:257] generating profile certs ...
	I1027 22:38:30.520840  724915 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.key
	I1027 22:38:30.520853  724915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.crt with IP's: []
	I1027 22:38:31.042927  724915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.crt ...
	I1027 22:38:31.042965  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.crt: {Name:mk2a7ce6744a7951ad65a86fdb0b8152d6cec650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.043174  724915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.key ...
	I1027 22:38:31.043197  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.key: {Name:mk3891a0f4239ba078236dd177d4d9ba77cd835c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.043334  724915 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7
	I1027 22:38:31.043353  724915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt.a2d2d0b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 22:38:31.342123  724915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt.a2d2d0b7 ...
	I1027 22:38:31.342154  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt.a2d2d0b7: {Name:mk99b26975ff00aeeefd15fbd54077d4849c8bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.342377  724915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7 ...
	I1027 22:38:31.342403  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7: {Name:mk7a32134132d91c1918a8248893a7cbcb723e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.342541  724915 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt.a2d2d0b7 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt
	I1027 22:38:31.342651  724915 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key
	I1027 22:38:31.342713  724915 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key
	I1027 22:38:31.342730  724915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt with IP's: []
	I1027 22:38:31.811408  724915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt ...
	I1027 22:38:31.811440  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt: {Name:mkc0fe77cda16a3d91122f2526bdc4cddd7e68c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.811627  724915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key ...
	I1027 22:38:31.811640  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key: {Name:mk401c1734200a084964c7e10451a046e9211914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:31.811822  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:38:31.811863  724915 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:38:31.811873  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:38:31.811895  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:38:31.811917  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:38:31.811937  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:38:31.811991  724915 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:38:31.812674  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:38:31.832806  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:38:31.851079  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:38:31.868789  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:38:31.886217  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 22:38:31.904131  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:38:31.921599  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:38:31.940340  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:38:31.959296  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:38:31.979333  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:38:32.000109  724915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:38:32.019258  724915 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:38:32.032622  724915 ssh_runner.go:195] Run: openssl version
	I1027 22:38:32.039169  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:38:32.048501  724915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:38:32.053317  724915 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:38:32.053374  724915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:38:32.094191  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:38:32.103714  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:38:32.112580  724915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:32.117102  724915 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:32.117150  724915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:32.152079  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:38:32.161391  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:38:32.170747  724915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:38:32.174592  724915 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:38:32.174647  724915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:38:32.211828  724915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
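The three-step pattern repeated above (install the PEM, compute its subject hash with openssl, link <hash>.0 into /etc/ssl/certs) is the manual equivalent of c_rehash: OpenSSL locates trust anchors by the hash-named symlink, not by the file name. A minimal sketch of the same dance for a single certificate; the ./example.pem path is hypothetical:

    # Compute the subject hash OpenSSL uses to look the cert up at verify time.
    hash=$(openssl x509 -hash -noout -in ./example.pem)
    # Install the PEM, then expose it under its hash name; verification resolves
    # /etc/ssl/certs/<subject-hash>.0, so the link name is what makes it trusted.
    sudo cp ./example.pem /usr/share/ca-certificates/example.pem
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"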
	I1027 22:38:32.221710  724915 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:38:32.225577  724915 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
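The nonzero exit from stat is the whole signal here: a missing apiserver-kubelet-client.crt is treated as evidence of a first start rather than as an error. A hedged sketch of the same probe, with the path taken from the log above:

    # Branch on stat's exit status: missing file => likely first start.
    if ! stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
        echo "kubelet client cert absent - assuming first start"
    fi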
	I1027 22:38:32.225645  724915 kubeadm.go:401] StartCluster: {Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:32.225772  724915 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:38:32.225831  724915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:38:32.257620  724915 cri.go:89] found id: ""
	I1027 22:38:32.257700  724915 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:38:32.269595  724915 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:38:32.279233  724915 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:38:32.279296  724915 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:38:32.288311  724915 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:38:32.288348  724915 kubeadm.go:158] found existing configuration files:
	
	I1027 22:38:32.288394  724915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:38:32.297700  724915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:38:32.297763  724915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:38:32.305970  724915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:38:32.314247  724915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:38:32.314317  724915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:38:32.322128  724915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:38:32.330891  724915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:38:32.331000  724915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:38:32.339588  724915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:38:32.348810  724915 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:38:32.348877  724915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
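The four grep-then-rm exchanges above are one pattern applied to each kubeconfig: any conf that does not reference https://control-plane.minikube.internal:8443 is treated as stale for this profile and removed before kubeadm init runs. A compact sketch of that sweep, using the same four files as the log:

    for name in admin kubelet controller-manager scheduler; do
        conf="/etc/kubernetes/${name}.conf"
        # grep exits nonzero when the endpoint is absent (or the file is missing);
        # either way the stale conf must not survive into kubeadm init.
        if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$conf"; then
            sudo rm -f "$conf"
        fi
    done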
	I1027 22:38:32.356640  724915 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:38:32.418192  724915 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 22:38:32.478367  724915 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 22:38:29.338766  726897 out.go:252] * Restarting existing docker container for "no-preload-188814" ...
	I1027 22:38:29.338851  726897 cli_runner.go:164] Run: docker start no-preload-188814
	I1027 22:38:29.598021  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:29.617797  726897 kic.go:430] container "no-preload-188814" state is running.
	I1027 22:38:29.618285  726897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:38:29.636150  726897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/config.json ...
	I1027 22:38:29.636406  726897 machine.go:94] provisionDockerMachine start ...
	I1027 22:38:29.636506  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:29.660669  726897 main.go:143] libmachine: Using SSH client type: native
	I1027 22:38:29.661015  726897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1027 22:38:29.661035  726897 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:38:29.661741  726897 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47260->127.0.0.1:33073: read: connection reset by peer
	I1027 22:38:32.804540  726897 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-188814
	
	I1027 22:38:32.804570  726897 ubuntu.go:182] provisioning hostname "no-preload-188814"
	I1027 22:38:32.804642  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:32.823996  726897 main.go:143] libmachine: Using SSH client type: native
	I1027 22:38:32.824301  726897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1027 22:38:32.824321  726897 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-188814 && echo "no-preload-188814" | sudo tee /etc/hostname
	I1027 22:38:32.978009  726897 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-188814
	
	I1027 22:38:32.978110  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:32.996457  726897 main.go:143] libmachine: Using SSH client type: native
	I1027 22:38:32.996709  726897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1027 22:38:32.996727  726897 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-188814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-188814/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-188814' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:38:33.141263  726897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
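The SSH script above either rewrites an existing 127.0.1.1 entry or appends one, so the container can resolve its own hostname without DNS. A quick hedged check of the result it should leave behind:

    # After the script runs, the node name should resolve locally:
    grep '^127\.0\.1\.1' /etc/hosts      # expect: 127.0.1.1 no-preload-188814
    getent hosts no-preload-188814       # should be answered from /etc/hosts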
	I1027 22:38:33.141295  726897 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:38:33.141342  726897 ubuntu.go:190] setting up certificates
	I1027 22:38:33.141361  726897 provision.go:84] configureAuth start
	I1027 22:38:33.141425  726897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:38:33.164147  726897 provision.go:143] copyHostCerts
	I1027 22:38:33.164221  726897 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:38:33.164250  726897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:38:33.164336  726897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:38:33.164460  726897 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:38:33.164475  726897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:38:33.164517  726897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:38:33.164607  726897 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:38:33.164622  726897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:38:33.164659  726897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:38:33.164727  726897 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.no-preload-188814 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-188814]
	I1027 22:38:33.422338  726897 provision.go:177] copyRemoteCerts
	I1027 22:38:33.422415  726897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:38:33.422472  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:33.441348  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:33.545310  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:38:33.564774  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:38:33.584116  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:38:33.603434  726897 provision.go:87] duration metric: took 462.05285ms to configureAuth
	I1027 22:38:33.603475  726897 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:38:33.603692  726897 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:33.603817  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:33.623515  726897 main.go:143] libmachine: Using SSH client type: native
	I1027 22:38:33.623762  726897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1027 22:38:33.623777  726897 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:38:33.942205  726897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:38:33.942239  726897 machine.go:97] duration metric: took 4.305804007s to provisionDockerMachine
	I1027 22:38:33.942258  726897 start.go:293] postStartSetup for "no-preload-188814" (driver="docker")
	I1027 22:38:33.942274  726897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:38:33.942378  726897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:38:33.942436  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:33.963099  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:34.066504  726897 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:38:34.070536  726897 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:38:34.070566  726897 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:38:34.070579  726897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:38:34.070648  726897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:38:34.070753  726897 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:38:34.070868  726897 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:38:34.079610  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:38:34.101775  726897 start.go:296] duration metric: took 159.498893ms for postStartSetup
	I1027 22:38:34.101845  726897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:38:34.101912  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:34.122453  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:30.903174  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:30.903686  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:30.903744  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:30.903802  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:30.931429  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:30.931451  682462 cri.go:89] found id: ""
	I1027 22:38:30.931464  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:30.931531  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:30.935547  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:30.935612  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:30.962130  682462 cri.go:89] found id: ""
	I1027 22:38:30.962162  682462 logs.go:282] 0 containers: []
	W1027 22:38:30.962175  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:30.962188  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:30.962252  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:30.989785  682462 cri.go:89] found id: ""
	I1027 22:38:30.989808  682462 logs.go:282] 0 containers: []
	W1027 22:38:30.989817  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:30.989826  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:30.989885  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:31.017793  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:31.017815  682462 cri.go:89] found id: ""
	I1027 22:38:31.017823  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:31.017882  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:31.022130  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:31.022211  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:31.050645  682462 cri.go:89] found id: ""
	I1027 22:38:31.050671  682462 logs.go:282] 0 containers: []
	W1027 22:38:31.050683  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:31.050691  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:31.050743  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:31.081341  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:31.081367  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:31.081372  682462 cri.go:89] found id: ""
	I1027 22:38:31.081382  682462 logs.go:282] 2 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e]
	I1027 22:38:31.081447  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:31.085582  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:31.089474  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:31.089550  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:31.116522  682462 cri.go:89] found id: ""
	I1027 22:38:31.116550  682462 logs.go:282] 0 containers: []
	W1027 22:38:31.116561  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:31.116579  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:31.116640  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:31.145803  682462 cri.go:89] found id: ""
	I1027 22:38:31.145831  682462 logs.go:282] 0 containers: []
	W1027 22:38:31.145843  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:31.145861  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:31.145876  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:31.166122  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:31.166161  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:31.205622  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:31.205661  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:31.233327  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:31.233357  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:31.293777  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:31.293812  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:31.328603  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:31.328640  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:31.420871  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:31.420909  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:31.483181  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:31.483210  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:31.483228  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:31.538306  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:38:31.538346  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:34.069181  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:34.069566  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:34.069620  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:34.069679  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:34.101353  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:34.101376  682462 cri.go:89] found id: ""
	I1027 22:38:34.101386  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:34.101458  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:34.106259  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:34.106333  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:34.138741  682462 cri.go:89] found id: ""
	I1027 22:38:34.138772  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.138784  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:34.138792  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:34.138850  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:34.170189  682462 cri.go:89] found id: ""
	I1027 22:38:34.170214  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.170222  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:34.170229  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:34.170280  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:34.201456  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:34.201482  682462 cri.go:89] found id: ""
	I1027 22:38:34.201494  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:34.201562  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:34.206190  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:34.206276  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:34.241598  682462 cri.go:89] found id: ""
	I1027 22:38:34.241633  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.241649  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:34.241659  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:34.241735  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:34.275599  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:34.275618  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:34.275623  682462 cri.go:89] found id: ""
	I1027 22:38:34.275638  682462 logs.go:282] 2 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e]
	I1027 22:38:34.275691  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:34.226824  726897 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:38:34.232920  726897 fix.go:57] duration metric: took 4.914545091s for fixHost
	I1027 22:38:34.232978  726897 start.go:83] releasing machines lock for "no-preload-188814", held for 4.914623118s
	I1027 22:38:34.233058  726897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-188814
	I1027 22:38:34.253412  726897 ssh_runner.go:195] Run: cat /version.json
	I1027 22:38:34.253477  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:34.253486  726897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:38:34.253572  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:34.275492  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:34.275779  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:34.434291  726897 ssh_runner.go:195] Run: systemctl --version
	I1027 22:38:34.442304  726897 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:38:34.487934  726897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:38:34.493498  726897 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:38:34.493574  726897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:38:34.502757  726897 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:38:34.502784  726897 start.go:496] detecting cgroup driver to use...
	I1027 22:38:34.502852  726897 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:38:34.502914  726897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:38:34.519974  726897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:38:34.533797  726897 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:38:34.533860  726897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:38:34.551298  726897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:38:34.566077  726897 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:38:34.658336  726897 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:38:34.751649  726897 docker.go:234] disabling docker service ...
	I1027 22:38:34.751715  726897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:38:34.767723  726897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:38:34.783258  726897 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:38:34.866426  726897 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:38:34.952086  726897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:38:34.966046  726897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:38:34.981314  726897 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:38:34.981384  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:34.991313  726897 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:38:34.991378  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.001065  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.010726  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.020553  726897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:38:35.029267  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.038769  726897 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.048795  726897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:38:35.058278  726897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:38:35.065972  726897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:38:35.073828  726897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:35.162332  726897 ssh_runner.go:195] Run: sudo systemctl restart crio
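The run of sed edits above converges /etc/crio/crio.conf.d/02-crio.conf on four settings: the pause image, the systemd cgroup manager, conmon in the pod cgroup, and an unprivileged-port sysctl. A hedged spot-check of the drop-in after the restart:

    # Confirm the fields the sed edits were supposed to set:
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected (sketch):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",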
	I1027 22:38:35.274891  726897 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:38:35.275017  726897 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:38:35.279706  726897 start.go:564] Will wait 60s for crictl version
	I1027 22:38:35.279796  726897 ssh_runner.go:195] Run: which crictl
	I1027 22:38:35.284406  726897 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:38:35.311426  726897 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:38:35.311526  726897 ssh_runner.go:195] Run: crio --version
	I1027 22:38:35.343236  726897 ssh_runner.go:195] Run: crio --version
	I1027 22:38:35.376620  726897 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:38:35.377706  726897 cli_runner.go:164] Run: docker network inspect no-preload-188814 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:38:35.396543  726897 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 22:38:35.401268  726897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:38:35.412909  726897 kubeadm.go:884] updating cluster {Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:38:35.413061  726897 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:38:35.413100  726897 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:38:35.448113  726897 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:38:35.448142  726897 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:38:35.448153  726897 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 22:38:35.448278  726897 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-188814 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
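The empty ExecStart= line in the drop-in above is the standard systemd idiom: a drop-in cannot replace ExecStart directly, so it first clears the list inherited from the base kubelet.service and then supplies the override. The merged result can be inspected after the fact:

    # Show the base unit plus its drop-ins; the last non-empty ExecStart wins.
    systemctl cat kubelet | grep -B1 -A1 '^ExecStart='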
	I1027 22:38:35.448363  726897 ssh_runner.go:195] Run: crio config
	I1027 22:38:35.510488  726897 cni.go:84] Creating CNI manager for ""
	I1027 22:38:35.510512  726897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:35.510546  726897 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:38:35.510610  726897 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-188814 NodeName:no-preload-188814 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:38:35.510810  726897 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-188814"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
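A generated config like the one above (the log scp's it to /var/tmp/minikube/kubeadm.yaml.new a few lines below) can be exercised without mutating the node; a hedged sketch using kubeadm's dry-run mode, with the path assumed from this log:

    # Render everything init would do, but write nothing to the node.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run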
	
	I1027 22:38:35.510913  726897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:38:35.519895  726897 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:38:35.519981  726897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:38:35.528532  726897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:38:35.542424  726897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:38:35.556568  726897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1027 22:38:35.570882  726897 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:38:35.575150  726897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:38:35.586468  726897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:35.668886  726897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:38:35.699129  726897 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814 for IP: 192.168.94.2
	I1027 22:38:35.699154  726897 certs.go:195] generating shared ca certs ...
	I1027 22:38:35.699175  726897 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:35.699339  726897 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:38:35.699395  726897 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:38:35.699409  726897 certs.go:257] generating profile certs ...
	I1027 22:38:35.699513  726897 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.key
	I1027 22:38:35.699593  726897 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key.c506b838
	I1027 22:38:35.699650  726897 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.key
	I1027 22:38:35.699790  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:38:35.699836  726897 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:38:35.699851  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:38:35.699887  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:38:35.699919  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:38:35.699977  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:38:35.700044  726897 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:38:35.700922  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:38:35.722536  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:38:35.744343  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:38:35.767725  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:38:35.798457  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:38:35.817990  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:38:35.843082  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:38:35.862167  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:38:35.881635  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:38:35.901160  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:38:35.922116  726897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:38:35.942874  726897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:38:35.956673  726897 ssh_runner.go:195] Run: openssl version
	I1027 22:38:35.963420  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:38:35.972608  726897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:38:35.976755  726897 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:38:35.976816  726897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:38:36.014377  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:38:36.024514  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:38:36.037057  726897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:38:36.043555  726897 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:38:36.043732  726897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:38:36.085132  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:38:36.094742  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:38:36.104603  726897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:36.109039  726897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:36.109092  726897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:38:36.145629  726897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
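The three symlink steps above implement OpenSSL's hashed-directory convention: each CA in /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 link (51391683.0, 3ec20f2e.0 and b5213941.0 in this run) so OpenSSL-linked clients can find it by subject hash. A minimal by-hand sketch of the same steps, reusing the minikubeCA.pem path from the log:

	# Compute the subject hash (b5213941 for minikubeCA.pem in this run)
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Recreate the <hash>.0 symlink minikube installs above
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"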
	I1027 22:38:36.155102  726897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:38:36.159502  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:38:36.196158  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:38:36.243545  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:38:36.297875  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:38:36.354989  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:38:36.411613  726897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
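The -checkend probes above are minikube's expiry gate for the control-plane certs: openssl x509 -checkend N exits 0 only if the certificate stays valid for at least another N seconds, so any failure within this 86400s (24h) window would force regeneration before StartCluster. The same check by hand, on one of the paths from the log:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h - would be regenerated"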
	I1027 22:38:36.450358  726897 kubeadm.go:401] StartCluster: {Name:no-preload-188814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-188814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:36.450479  726897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:38:36.450563  726897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:38:36.492218  726897 cri.go:89] found id: "cb9c2393e547842667e6423cc2d69ddfd9af4a1579d9d9531bc90992a0e1b634"
	I1027 22:38:36.492258  726897 cri.go:89] found id: "221d83fbd903479a3c762233eb12a7ec04e14004807c2ce9ea61f8e212524c54"
	I1027 22:38:36.492264  726897 cri.go:89] found id: "002c10e5f271a370eae7e9ac4bbcfa8188b01c92b6b9cb7d034828d114167209"
	I1027 22:38:36.492268  726897 cri.go:89] found id: "da762329de2a8c6c1610d73b7afd01c216fefae715c921b854c125c03fe0ac85"
	I1027 22:38:36.492272  726897 cri.go:89] found id: ""
	I1027 22:38:36.492324  726897 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:38:36.510731  726897 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:38:36Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:38:36.511257  726897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:38:36.522504  726897 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:38:36.522526  726897 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:38:36.522577  726897 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:38:36.532923  726897 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:38:36.533814  726897 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-188814" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:36.534355  726897 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-188814" cluster setting kubeconfig missing "no-preload-188814" context setting]
	I1027 22:38:36.535484  726897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
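kubeconfig.go has found neither a "no-preload-188814" cluster nor context in the jenkins kubeconfig and rewrites the file under a WriteFile lock. Roughly the same repair expressed with kubectl config (a hand sketch only; minikube edits the file directly, and the user entry name below is an assumption):

	KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	kubectl config set-cluster no-preload-188814 \
	  --server=https://192.168.94.2:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt \
	  --kubeconfig="$KUBECONFIG"
	kubectl config set-context no-preload-188814 \
	  --cluster=no-preload-188814 --user=no-preload-188814 \
	  --kubeconfig="$KUBECONFIG"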
	I1027 22:38:36.537521  726897 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:38:36.548021  726897 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1027 22:38:36.548065  726897 kubeadm.go:602] duration metric: took 25.532571ms to restartPrimaryControlPlane
	I1027 22:38:36.548089  726897 kubeadm.go:403] duration metric: took 97.734505ms to StartCluster
	I1027 22:38:36.548113  726897 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:36.548208  726897 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:36.549445  726897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:36.549735  726897 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:38:36.549834  726897 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:38:36.549940  726897 addons.go:69] Setting storage-provisioner=true in profile "no-preload-188814"
	I1027 22:38:36.549953  726897 addons.go:69] Setting dashboard=true in profile "no-preload-188814"
	I1027 22:38:36.549970  726897 addons.go:238] Setting addon dashboard=true in "no-preload-188814"
	I1027 22:38:36.549970  726897 addons.go:238] Setting addon storage-provisioner=true in "no-preload-188814"
	W1027 22:38:36.549979  726897 addons.go:247] addon storage-provisioner should already be in state true
	W1027 22:38:36.549979  726897 addons.go:247] addon dashboard should already be in state true
	I1027 22:38:36.550003  726897 addons.go:69] Setting default-storageclass=true in profile "no-preload-188814"
	I1027 22:38:36.550041  726897 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-188814"
	I1027 22:38:36.550019  726897 host.go:66] Checking if "no-preload-188814" exists ...
	I1027 22:38:36.550019  726897 host.go:66] Checking if "no-preload-188814" exists ...
	I1027 22:38:36.550017  726897 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:36.550424  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:36.550603  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:36.550718  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:36.553472  726897 out.go:179] * Verifying Kubernetes components...
	I1027 22:38:36.554690  726897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:36.585520  726897 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 22:38:36.590116  726897 addons.go:238] Setting addon default-storageclass=true in "no-preload-188814"
	W1027 22:38:36.590140  726897 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:38:36.590175  726897 host.go:66] Checking if "no-preload-188814" exists ...
	I1027 22:38:36.590656  726897 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:38:36.592985  726897 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 22:38:36.594144  726897 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:38:36.594147  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 22:38:36.594242  726897 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 22:38:36.594307  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:36.595350  726897 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:38:36.595371  726897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:38:36.595426  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:36.621187  726897 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:38:36.621221  726897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:38:36.621295  726897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:38:36.635220  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:36.647998  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:38:36.666143  726897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
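All three addon installers share one SSH endpoint: the docker driver publishes the container's port 22 on 127.0.0.1:33073 (resolved by the docker container inspect "22/tcp" calls above) and pushes manifests over scp as user docker. The equivalent interactive session, with every value taken from the log:

	ssh -i /home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa \
	  -p 33073 docker@127.0.0.1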
	I1027 22:38:36.779609  726897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:38:36.781790  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 22:38:36.781816  726897 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 22:38:36.803576  726897 node_ready.go:35] waiting up to 6m0s for node "no-preload-188814" to be "Ready" ...
	I1027 22:38:36.819510  726897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:38:36.819660  726897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:38:36.825241  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 22:38:36.825269  726897 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 22:38:36.887037  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 22:38:36.887070  726897 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 22:38:36.926925  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 22:38:36.926968  726897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 22:38:36.945244  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 22:38:36.945273  726897 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 22:38:36.964026  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 22:38:36.964054  726897 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 22:38:36.985295  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 22:38:36.985329  726897 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 22:38:37.002371  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 22:38:37.002488  726897 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 22:38:37.023396  726897 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:38:37.023428  726897 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 22:38:37.039591  726897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:38:38.703060  726897 node_ready.go:49] node "no-preload-188814" is "Ready"
	I1027 22:38:38.703107  726897 node_ready.go:38] duration metric: took 1.899482355s for node "no-preload-188814" to be "Ready" ...
	I1027 22:38:38.703141  726897 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:38:38.703209  726897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:38:34.280220  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:34.284564  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:34.284644  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:34.315498  682462 cri.go:89] found id: ""
	I1027 22:38:34.315528  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.315537  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:34.315545  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:34.315615  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:34.346061  682462 cri.go:89] found id: ""
	I1027 22:38:34.346090  682462 logs.go:282] 0 containers: []
	W1027 22:38:34.346100  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:34.346130  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:34.346147  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:34.379969  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:34.380007  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:34.402037  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:38:34.402076  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:34.436434  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:34.436474  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:34.533827  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:34.533865  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:34.605393  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:34.605441  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:34.605461  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:34.646201  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:34.646241  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:34.716659  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:34.716703  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:34.748528  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:34.748560  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:37.308090  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:37.308709  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
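Each "Checking apiserver healthz" line is a GET against /healthz on the advertise address; "connection refused" only means nothing is listening on 8443 yet, so the loop keeps gathering logs and retrying. The same probe by hand (-k because the serving cert is signed by the cluster's own CA):

	curl -k https://192.168.76.2:8443/healthz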
	I1027 22:38:37.308785  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:37.308851  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:37.352401  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:37.352430  682462 cri.go:89] found id: ""
	I1027 22:38:37.352441  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:37.352508  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:37.358406  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:37.358480  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:37.401768  682462 cri.go:89] found id: ""
	I1027 22:38:37.401794  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.401804  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:37.401812  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:37.401867  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:37.442805  682462 cri.go:89] found id: ""
	I1027 22:38:37.442837  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.442849  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:37.442858  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:37.442925  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:37.485292  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:37.485427  682462 cri.go:89] found id: ""
	I1027 22:38:37.485452  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:37.485519  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:37.491539  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:37.491610  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:37.532559  682462 cri.go:89] found id: ""
	I1027 22:38:37.532594  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.532605  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:37.532614  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:37.532676  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:37.577708  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:37.577729  682462 cri.go:89] found id: "6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:37.577732  682462 cri.go:89] found id: ""
	I1027 22:38:37.577740  682462 logs.go:282] 2 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e]
	I1027 22:38:37.577789  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:37.584125  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:37.589205  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:37.589276  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:37.640825  682462 cri.go:89] found id: ""
	I1027 22:38:37.640855  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.640884  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:37.640893  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:37.640981  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:37.686439  682462 cri.go:89] found id: ""
	I1027 22:38:37.686549  682462 logs.go:282] 0 containers: []
	W1027 22:38:37.686560  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:37.686579  682462 logs.go:123] Gathering logs for kube-controller-manager [6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e] ...
	I1027 22:38:37.686604  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a9725ba90b4034326daeb7fc4da5322f0cfd49d5ab8d35f31a027a9d6fe563e"
	I1027 22:38:37.735321  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:37.735362  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:37.825453  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:37.825496  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:37.881111  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:37.881152  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:37.928016  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:37.928059  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:37.984481  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:37.984516  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:38.143604  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:38.143650  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:38.182557  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:38.182600  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:38.277080  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:38.277118  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:38.277187  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:39.501932  726897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.682237745s)
	I1027 22:38:39.502025  726897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.682475741s)
	I1027 22:38:39.502155  726897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.462524358s)
	I1027 22:38:39.502186  726897 api_server.go:72] duration metric: took 2.952412975s to wait for apiserver process to appear ...
	I1027 22:38:39.502200  726897 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:38:39.502230  726897 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 22:38:39.503781  726897 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-188814 addons enable metrics-server
	
	I1027 22:38:39.507212  726897 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:38:39.507242  726897 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
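A 500 from /healthz with [-] entries means the apiserver is up but still seeding: the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet, which is normal seconds after a restart (the endpoint returns 200 at 22:38:40 below). The aggregate output withholds failure reasons; the checks can also be queried directly:

	# verbose aggregate report
	curl -k 'https://192.168.94.2:8443/healthz?verbose'
	# or probe a single post-start hook
	curl -k https://192.168.94.2:8443/healthz/poststarthook/rbac/bootstrap-roles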
	I1027 22:38:39.510726  726897 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 22:38:42.322809  724915 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:38:42.322861  724915 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:38:42.322964  724915 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:38:42.323036  724915 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:38:42.323068  724915 kubeadm.go:319] OS: Linux
	I1027 22:38:42.323177  724915 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:38:42.323277  724915 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:38:42.323346  724915 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:38:42.323435  724915 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:38:42.323518  724915 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:38:42.323563  724915 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:38:42.323611  724915 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:38:42.323650  724915 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:38:42.323725  724915 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:38:42.323812  724915 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:38:42.323908  724915 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:38:42.324008  724915 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:38:42.325210  724915 out.go:252]   - Generating certificates and keys ...
	I1027 22:38:42.325301  724915 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:38:42.325363  724915 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:38:42.325449  724915 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:38:42.325543  724915 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:38:42.325646  724915 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:38:42.325741  724915 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:38:42.325854  724915 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:38:42.326006  724915 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-829976 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 22:38:42.326087  724915 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:38:42.326251  724915 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-829976 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 22:38:42.326353  724915 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:38:42.326444  724915 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:38:42.326529  724915 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:38:42.326637  724915 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:38:42.326716  724915 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:38:42.326798  724915 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:38:42.326887  724915 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:38:42.327009  724915 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:38:42.327083  724915 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:38:42.327210  724915 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:38:42.327325  724915 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:38:42.328722  724915 out.go:252]   - Booting up control plane ...
	I1027 22:38:42.328803  724915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:38:42.328870  724915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:38:42.328924  724915 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:38:42.329042  724915 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:38:42.329154  724915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:38:42.329279  724915 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:38:42.329363  724915 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:38:42.329412  724915 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:38:42.329535  724915 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:38:42.329631  724915 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:38:42.329680  724915 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.026085ms
	I1027 22:38:42.329779  724915 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:38:42.329870  724915 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 22:38:42.329970  724915 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:38:42.330048  724915 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:38:42.330120  724915 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.705234911s
	I1027 22:38:42.330179  724915 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.91005687s
	I1027 22:38:42.330241  724915 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502523761s
	I1027 22:38:42.330334  724915 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:38:42.330459  724915 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:38:42.330529  724915 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:38:42.330762  724915 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-829976 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:38:42.330846  724915 kubeadm.go:319] [bootstrap-token] Using token: ra0n2j.d96j3y85d2xm2zyd
	I1027 22:38:42.332220  724915 out.go:252]   - Configuring RBAC rules ...
	I1027 22:38:42.332342  724915 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:38:42.332447  724915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:38:42.332652  724915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:38:42.332838  724915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:38:42.333022  724915 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:38:42.333154  724915 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:38:42.333293  724915 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:38:42.333354  724915 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:38:42.333408  724915 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:38:42.333418  724915 kubeadm.go:319] 
	I1027 22:38:42.333510  724915 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:38:42.333519  724915 kubeadm.go:319] 
	I1027 22:38:42.333606  724915 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:38:42.333616  724915 kubeadm.go:319] 
	I1027 22:38:42.333665  724915 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:38:42.333750  724915 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:38:42.333826  724915 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:38:42.333835  724915 kubeadm.go:319] 
	I1027 22:38:42.333899  724915 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:38:42.333906  724915 kubeadm.go:319] 
	I1027 22:38:42.333977  724915 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:38:42.333987  724915 kubeadm.go:319] 
	I1027 22:38:42.334035  724915 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:38:42.334104  724915 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:38:42.334167  724915 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:38:42.334173  724915 kubeadm.go:319] 
	I1027 22:38:42.334262  724915 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:38:42.334349  724915 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:38:42.334356  724915 kubeadm.go:319] 
	I1027 22:38:42.334494  724915 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ra0n2j.d96j3y85d2xm2zyd \
	I1027 22:38:42.334645  724915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 22:38:42.334678  724915 kubeadm.go:319] 	--control-plane 
	I1027 22:38:42.334687  724915 kubeadm.go:319] 
	I1027 22:38:42.334793  724915 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:38:42.334801  724915 kubeadm.go:319] 
	I1027 22:38:42.334885  724915 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ra0n2j.d96j3y85d2xm2zyd \
	I1027 22:38:42.335084  724915 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
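The join commands pin the cluster CA via --discovery-token-ca-cert-hash, which per the kubeadm documentation is the SHA-256 of the CA's DER-encoded public key. It can be recomputed on the control plane (minikube keeps the CA at /var/lib/minikube/certs/ca.crt rather than kubeadm's default /etc/kubernetes/pki/ca.crt) and should reproduce the c10d1bb8... value printed above:

	openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'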
	I1027 22:38:42.335101  724915 cni.go:84] Creating CNI manager for ""
	I1027 22:38:42.335113  724915 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:42.336553  724915 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:38:42.337614  724915 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:38:42.342664  724915 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:38:42.342687  724915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:38:42.357415  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:38:42.590246  724915 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:38:42.590350  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:42.590370  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-829976 minikube.k8s.io/updated_at=2025_10_27T22_38_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=embed-certs-829976 minikube.k8s.io/primary=true
	I1027 22:38:42.601266  724915 ops.go:34] apiserver oom_adj: -16
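ops.go reads the apiserver's legacy OOM knob and logs -16: kubelet gives critical static pods oom_score_adj -997, which surfaces in /proc as oom_adj -16, keeping the kernel OOM killer away from the apiserver. The same check by hand, mirroring the command in the log:

	cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16; lower means less likely to be OOM-killed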
	I1027 22:38:42.682810  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:43.183354  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:43.683665  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:39.512033  726897 addons.go:514] duration metric: took 2.962204357s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 22:38:40.003099  726897 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 22:38:40.007383  726897 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1027 22:38:40.008287  726897 api_server.go:141] control plane version: v1.34.1
	I1027 22:38:40.008312  726897 api_server.go:131] duration metric: took 506.105489ms to wait for apiserver health ...
	I1027 22:38:40.008322  726897 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:38:40.011730  726897 system_pods.go:59] 8 kube-system pods found
	I1027 22:38:40.011760  726897 system_pods.go:61] "coredns-66bc5c9577-m8lfc" [486551a5-b1eb-4fb1-8f1e-ba4a945a2791] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:38:40.011767  726897 system_pods.go:61] "etcd-no-preload-188814" [793ec55b-c1aa-483b-b315-3e75a21d71d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:38:40.011777  726897 system_pods.go:61] "kindnet-thlc6" [9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9] Running
	I1027 22:38:40.011783  726897 system_pods.go:61] "kube-apiserver-no-preload-188814" [572f9081-8ed9-4e69-8d77-0475bcae35b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:38:40.011791  726897 system_pods.go:61] "kube-controller-manager-no-preload-188814" [f2669c26-b7c4-4d32-8dc0-6ef7e15dea21] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:38:40.011796  726897 system_pods.go:61] "kube-proxy-4nwvc" [a82e59ec-7ef7-46aa-a9d3-64a1f8af2222] Running
	I1027 22:38:40.011803  726897 system_pods.go:61] "kube-scheduler-no-preload-188814" [012078bb-8e72-4b64-b7a4-48f33c1a1092] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:38:40.011809  726897 system_pods.go:61] "storage-provisioner" [9bd12118-14fd-4ef6-a0f1-dd7130601f49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:38:40.011817  726897 system_pods.go:74] duration metric: took 3.489312ms to wait for pod list to return data ...
	I1027 22:38:40.011827  726897 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:38:40.013909  726897 default_sa.go:45] found service account: "default"
	I1027 22:38:40.013928  726897 default_sa.go:55] duration metric: took 2.09243ms for default service account to be created ...
	I1027 22:38:40.013938  726897 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:38:40.016626  726897 system_pods.go:86] 8 kube-system pods found
	I1027 22:38:40.016657  726897 system_pods.go:89] "coredns-66bc5c9577-m8lfc" [486551a5-b1eb-4fb1-8f1e-ba4a945a2791] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:38:40.016673  726897 system_pods.go:89] "etcd-no-preload-188814" [793ec55b-c1aa-483b-b315-3e75a21d71d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:38:40.016682  726897 system_pods.go:89] "kindnet-thlc6" [9f6e8c2d-488a-4cf6-b30f-bb55e0c1f8b9] Running
	I1027 22:38:40.016692  726897 system_pods.go:89] "kube-apiserver-no-preload-188814" [572f9081-8ed9-4e69-8d77-0475bcae35b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:38:40.016705  726897 system_pods.go:89] "kube-controller-manager-no-preload-188814" [f2669c26-b7c4-4d32-8dc0-6ef7e15dea21] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:38:40.016711  726897 system_pods.go:89] "kube-proxy-4nwvc" [a82e59ec-7ef7-46aa-a9d3-64a1f8af2222] Running
	I1027 22:38:40.016720  726897 system_pods.go:89] "kube-scheduler-no-preload-188814" [012078bb-8e72-4b64-b7a4-48f33c1a1092] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:38:40.016728  726897 system_pods.go:89] "storage-provisioner" [9bd12118-14fd-4ef6-a0f1-dd7130601f49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:38:40.016740  726897 system_pods.go:126] duration metric: took 2.768995ms to wait for k8s-apps to be running ...
	I1027 22:38:40.016752  726897 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:38:40.016806  726897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:38:40.030591  726897 system_svc.go:56] duration metric: took 13.825821ms WaitForService to wait for kubelet
	I1027 22:38:40.030622  726897 kubeadm.go:587] duration metric: took 3.48085182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:38:40.030642  726897 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:38:40.033599  726897 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:38:40.033633  726897 node_conditions.go:123] node cpu capacity is 8
	I1027 22:38:40.033647  726897 node_conditions.go:105] duration metric: took 3.000721ms to run NodePressure ...
	I1027 22:38:40.033659  726897 start.go:242] waiting for startup goroutines ...
	I1027 22:38:40.033666  726897 start.go:247] waiting for cluster config update ...
	I1027 22:38:40.033676  726897 start.go:256] writing updated cluster config ...
	I1027 22:38:40.033995  726897 ssh_runner.go:195] Run: rm -f paused
	I1027 22:38:40.038455  726897 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:38:40.041959  726897 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m8lfc" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 22:38:42.047766  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	W1027 22:38:44.048612  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
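pod_ready.go now polls each control-plane pod for up to 4m0s; the W lines above are poll iterations where coredns is still unready, not hard failures. A hand-rolled equivalent of that wait for the same pod (using kubectl wait rather than minikube's client-go poll):

	kubectl --context no-preload-188814 -n kube-system \
	  wait pod coredns-66bc5c9577-m8lfc --for=condition=Ready --timeout=4m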
	I1027 22:38:40.866028  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:40.866510  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:40.866568  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:40.866630  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:40.901261  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:40.901288  682462 cri.go:89] found id: ""
	I1027 22:38:40.901300  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:40.901364  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:40.906638  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:40.906721  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:40.947742  682462 cri.go:89] found id: ""
	I1027 22:38:40.947774  682462 logs.go:282] 0 containers: []
	W1027 22:38:40.947785  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:40.947793  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:40.947863  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:40.988406  682462 cri.go:89] found id: ""
	I1027 22:38:40.988437  682462 logs.go:282] 0 containers: []
	W1027 22:38:40.988449  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:40.988457  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:40.988524  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:41.021368  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:41.021393  682462 cri.go:89] found id: ""
	I1027 22:38:41.021403  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:41.021461  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:41.026168  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:41.026259  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:41.057538  682462 cri.go:89] found id: ""
	I1027 22:38:41.057569  682462 logs.go:282] 0 containers: []
	W1027 22:38:41.057583  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:41.057592  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:41.057652  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:41.088001  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:41.088025  682462 cri.go:89] found id: ""
	I1027 22:38:41.088034  682462 logs.go:282] 1 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387]
	I1027 22:38:41.088086  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:41.092957  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:41.093049  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:41.124700  682462 cri.go:89] found id: ""
	I1027 22:38:41.124733  682462 logs.go:282] 0 containers: []
	W1027 22:38:41.124746  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:41.124755  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:41.124815  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:41.156323  682462 cri.go:89] found id: ""
	I1027 22:38:41.156356  682462 logs.go:282] 0 containers: []
	W1027 22:38:41.156368  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:41.156382  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:41.156402  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:41.213504  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:41.213548  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:41.244636  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:41.244671  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:41.312449  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:41.312494  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:41.355757  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:41.355788  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:41.462972  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:41.463015  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:41.484161  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:41.484207  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:41.557889  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:41.557919  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:41.557937  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
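Each pass of the loop above is the same two-step probe per control-plane component: resolve any matching container IDs, then tail the log of each hit. Run directly on the node, the pair for the apiserver is (container ID copied from the log above):

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810

Components for which the first command returns nothing (etcd, coredns, kube-proxy, kindnet, storage-provisioner here) are reported as "No container was found matching" and skipped.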
	I1027 22:38:44.111024  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:44.111534  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:44.111594  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:44.111656  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:44.148703  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:44.148734  682462 cri.go:89] found id: ""
	I1027 22:38:44.148745  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:44.148808  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:44.153837  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:44.153904  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:44.188073  682462 cri.go:89] found id: ""
	I1027 22:38:44.188103  682462 logs.go:282] 0 containers: []
	W1027 22:38:44.188114  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:44.188122  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:44.188184  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:44.220474  682462 cri.go:89] found id: ""
	I1027 22:38:44.220505  682462 logs.go:282] 0 containers: []
	W1027 22:38:44.220518  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:44.220526  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:44.220584  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:44.258910  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:44.258935  682462 cri.go:89] found id: ""
	I1027 22:38:44.258959  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:44.259020  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:44.264353  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:44.264429  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:44.183077  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:44.683244  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:45.182928  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:45.682895  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:46.185997  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:46.683194  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:47.183097  724915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:38:47.276074  724915 kubeadm.go:1114] duration metric: took 4.685836415s to wait for elevateKubeSystemPrivileges
	I1027 22:38:47.276121  724915 kubeadm.go:403] duration metric: took 15.050480479s to StartCluster
	I1027 22:38:47.276145  724915 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:47.276230  724915 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:47.278832  724915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:47.279102  724915 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 22:38:47.279117  724915 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:38:47.279198  724915 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:38:47.279295  724915 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-829976"
	I1027 22:38:47.279314  724915 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-829976"
	I1027 22:38:47.279349  724915 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:38:47.279366  724915 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:47.279464  724915 addons.go:69] Setting default-storageclass=true in profile "embed-certs-829976"
	I1027 22:38:47.279576  724915 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-829976"
	I1027 22:38:47.280005  724915 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:38:47.280013  724915 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:38:47.280720  724915 out.go:179] * Verifying Kubernetes components...
	I1027 22:38:47.282010  724915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:38:47.307257  724915 addons.go:238] Setting addon default-storageclass=true in "embed-certs-829976"
	I1027 22:38:47.307336  724915 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:38:47.307823  724915 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:38:47.308261  724915 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:38:47.310845  724915 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:38:47.310904  724915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:38:47.311026  724915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:38:47.343219  724915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:38:47.349027  724915 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:38:47.349051  724915 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:38:47.349110  724915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:38:47.375279  724915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:38:47.408649  724915 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 22:38:47.475455  724915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:38:47.493197  724915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:38:47.558108  724915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:38:47.735632  724915 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1027 22:38:47.737709  724915 node_ready.go:35] waiting up to 6m0s for node "embed-certs-829976" to be "Ready" ...
	I1027 22:38:47.985268  724915 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 22:38:47.986663  724915 addons.go:514] duration metric: took 707.467131ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 22:38:48.242041  724915 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-829976" context rescaled to 1 replicas
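The addon enablement traced above amounts to manifests copied onto the node and applied with the bundled kubectl. Done by hand, a rough equivalent for the storage-provisioner half would be (paths as logged):

	minikube -p embed-certs-829976 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml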
	W1027 22:38:46.548076  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	W1027 22:38:48.549869  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	I1027 22:38:44.297899  682462 cri.go:89] found id: ""
	I1027 22:38:44.297923  682462 logs.go:282] 0 containers: []
	W1027 22:38:44.297931  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:44.297937  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:44.298021  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:44.326826  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:44.326853  682462 cri.go:89] found id: ""
	I1027 22:38:44.326864  682462 logs.go:282] 1 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387]
	I1027 22:38:44.326920  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:44.331354  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:44.331426  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:44.361847  682462 cri.go:89] found id: ""
	I1027 22:38:44.361875  682462 logs.go:282] 0 containers: []
	W1027 22:38:44.361887  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:44.361894  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:44.361982  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:44.394064  682462 cri.go:89] found id: ""
	I1027 22:38:44.394093  682462 logs.go:282] 0 containers: []
	W1027 22:38:44.394103  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:44.394117  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:44.394131  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:44.467357  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:44.467381  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:44.467398  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:44.510809  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:44.510845  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:44.590456  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:44.590498  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:44.630660  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:44.630700  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:44.714621  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:44.714660  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:44.762564  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:44.762616  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:44.911591  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:44.911632  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:47.440015  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:47.440754  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:47.440810  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:47.440869  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:47.488674  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:47.488705  682462 cri.go:89] found id: ""
	I1027 22:38:47.488715  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:47.488773  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:47.495669  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:47.495738  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:47.550789  682462 cri.go:89] found id: ""
	I1027 22:38:47.550821  682462 logs.go:282] 0 containers: []
	W1027 22:38:47.550831  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:47.550841  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:47.550901  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:47.601002  682462 cri.go:89] found id: ""
	I1027 22:38:47.601033  682462 logs.go:282] 0 containers: []
	W1027 22:38:47.601044  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:47.601053  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:47.601121  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:47.643543  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:47.643568  682462 cri.go:89] found id: ""
	I1027 22:38:47.643578  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:47.643635  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:47.649859  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:47.649967  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:47.698101  682462 cri.go:89] found id: ""
	I1027 22:38:47.698127  682462 logs.go:282] 0 containers: []
	W1027 22:38:47.698139  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:47.698148  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:47.698226  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:47.747674  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:47.747821  682462 cri.go:89] found id: ""
	I1027 22:38:47.747883  682462 logs.go:282] 1 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387]
	I1027 22:38:47.748074  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:47.754667  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:47.754864  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:47.799558  682462 cri.go:89] found id: ""
	I1027 22:38:47.799588  682462 logs.go:282] 0 containers: []
	W1027 22:38:47.799598  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:47.799612  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:47.799676  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:47.836730  682462 cri.go:89] found id: ""
	I1027 22:38:47.836757  682462 logs.go:282] 0 containers: []
	W1027 22:38:47.836767  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:47.836787  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:47.836807  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:47.879545  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:47.879581  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:48.019058  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:48.019090  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:48.044265  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:48.044304  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:48.123238  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:48.123265  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:48.123282  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:48.169671  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:48.169708  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:48.249717  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:48.249762  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:48.288450  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:48.288481  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
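The sections that follow are minikube's standard post-failure dump, assembled from the same kinds of commands traced above: journalctl -u crio for the CRI-O section, crictl ps -a for the container status table, crictl logs for the per-component sections, kubectl describe nodes for the node summary, and dmesg for the kernel ring buffer. Collected by hand on a node, the bundle reduces to roughly:

	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo crictl ps -a
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400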
	
	
	==> CRI-O <==
	Oct 27 22:38:12 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:12.32847864Z" level=info msg="Starting container: 05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b" id=b9f3b3b1-8577-47d6-aed9-e1c4c0ec6c1c name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:12 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:12.330924696Z" level=info msg="Started container" PID=1681 containerID=05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper id=b9f3b3b1-8577-47d6-aed9-e1c4c0ec6c1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=67146ae18a8a1c99ac76ad9623adf2e88ddbb0590ad2168089e93ddfc353fde6
	Oct 27 22:38:13 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:13.214390958Z" level=info msg="Removing container: 9d5701af5dda9039d363675238269d9c7e24efd0a054f94c4cf7e85901485224" id=b4c34d35-2c7b-4480-8a57-2fbdeb34d217 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:38:13 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:13.224293317Z" level=info msg="Removed container 9d5701af5dda9039d363675238269d9c7e24efd0a054f94c4cf7e85901485224: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper" id=b4c34d35-2c7b-4480-8a57-2fbdeb34d217 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.076533961Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=5511dbdc-c73d-4bf1-a3a5-8d4824425635 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.077474907Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8f46648f-37a4-4f29-ba0b-02ef9d5ebcd5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.079153213Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8/kubernetes-dashboard" id=2498c4a7-07fc-48f0-a3c6-ef53c4c2fec9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.079300788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.083438346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.083606403Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9f478a5fbf3259f63e2ab66b49daddd160d69ebe8f788f9d9be07388d3e85acc/merged/etc/group: no such file or directory"
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.083901606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.11219391Z" level=info msg="Created container 3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8/kubernetes-dashboard" id=2498c4a7-07fc-48f0-a3c6-ef53c4c2fec9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.112740883Z" level=info msg="Starting container: 3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c" id=20af0a11-bcc0-47a1-9e33-969379825c81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:16 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:16.11438879Z" level=info msg="Started container" PID=1732 containerID=3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8/kubernetes-dashboard id=20af0a11-bcc0-47a1-9e33-969379825c81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1573afae1b7cb5e7e910e764107bf01746e149299824c8daf7b3acb03eddef26
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.145163724Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f44c29da-bd29-4e43-80b7-f11c7886f06a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.146159593Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb00c238-b3c0-4503-af96-5ed744489653 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.147294481Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper" id=025c1bc2-7c2a-4d6c-84ff-7ce2972e177d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.147512017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.154611097Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.155219147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.180655507Z" level=info msg="Created container f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper" id=025c1bc2-7c2a-4d6c-84ff-7ce2972e177d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.181389729Z" level=info msg="Starting container: f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc" id=3d06e330-fb79-4807-a400-100420ad8832 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.183415712Z" level=info msg="Started container" PID=1758 containerID=f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper id=3d06e330-fb79-4807-a400-100420ad8832 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67146ae18a8a1c99ac76ad9623adf2e88ddbb0590ad2168089e93ddfc353fde6
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.261897267Z" level=info msg="Removing container: 05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b" id=e4e01079-117e-4031-b5c0-2547ddefb0e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:38:31 old-k8s-version-908589 crio[565]: time="2025-10-27T22:38:31.274506774Z" level=info msg="Removed container 05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8/dashboard-metrics-scraper" id=e4e01079-117e-4031-b5c0-2547ddefb0e3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f38488d38d5e2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   67146ae18a8a1       dashboard-metrics-scraper-5f989dc9cf-wbww8       kubernetes-dashboard
	3df1b82967aac       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   34 seconds ago      Running             kubernetes-dashboard        0                   1573afae1b7cb       kubernetes-dashboard-8694d4445c-n7dg8            kubernetes-dashboard
	a5bda7727c540       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Running             storage-provisioner         1                   5759157ef8ca9       storage-provisioner                              kube-system
	21430bbf8df99       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   f27deefa19e93       coredns-5dd5756b68-jwp99                         kube-system
	27f9c91f932c4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   eda6be1b010df       busybox                                          default
	ec59b02f91c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   5759157ef8ca9       storage-provisioner                              kube-system
	40b5ad6840c82       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   21b149acc96df       kindnet-v6dh4                                    kube-system
	f4690cc69163d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   e06515105471f       kube-proxy-srms5                                 kube-system
	6cdce94a5f78b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   0467483b7743d       kube-controller-manager-old-k8s-version-908589   kube-system
	e64b44ab53a02       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   366ef0e44ad0f       kube-apiserver-old-k8s-version-908589            kube-system
	0552ed0e96ff6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   cccb7c031d152       etcd-old-k8s-version-908589                      kube-system
	e61d7b54f2b00       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   2b22f7f853a64       kube-scheduler-old-k8s-version-908589            kube-system
	
	
	==> coredns [21430bbf8df99df3b9a23d0e6400e2be25bca17ae542da44b69472a011a78162] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43721 - 10261 "HINFO IN 6022255000218777880.585288113833361241. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.03101895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
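The closing warning shows CoreDNS timing out against the in-cluster apiserver VIP (10.96.0.1:443). A quick first check when this appears is whether the kubernetes service has endpoints behind that VIP; a sketch, assuming kubectl is pointed at the affected cluster:

	kubectl -n default get endpoints kubernetes

An empty ENDPOINTS column would point at apiserver endpoint registration, while populated endpoints with a persisting timeout would point at service proxying on the node.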
	
	
	==> describe nodes <==
	Name:               old-k8s-version-908589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-908589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=old-k8s-version-908589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_36_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-908589
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:38:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:38:26 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:38:26 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:38:26 +0000   Mon, 27 Oct 2025 22:36:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:38:26 +0000   Mon, 27 Oct 2025 22:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-908589
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                150d46e9-6742-4ab0-adb7-789e26ecfc2c
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-jwp99                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     117s
	  kube-system                 etcd-old-k8s-version-908589                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m12s
	  kube-system                 kindnet-v6dh4                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-old-k8s-version-908589             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-old-k8s-version-908589    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-srms5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-old-k8s-version-908589             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-wbww8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-n7dg8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 116s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 2m10s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s               kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s               kubelet          Node old-k8s-version-908589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s               kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           118s               node-controller  Node old-k8s-version-908589 event: Registered Node old-k8s-version-908589 in Controller
	  Normal  NodeReady                104s               kubelet          Node old-k8s-version-908589 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-908589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-908589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-908589 event: Registered Node old-k8s-version-908589 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [0552ed0e96ff667dac3ef7da44469e9aecf41285625ff22fbc94d09f10ebe42a] <==
	{"level":"info","ts":"2025-10-27T22:37:53.698796Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-27T22:37:53.699034Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:37:53.699113Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T22:37:53.701698Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T22:37:53.701977Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T22:37:53.702019Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T22:37:53.702187Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-27T22:37:53.702302Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-27T22:37:55.591699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T22:37:55.591745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T22:37:55.591773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-27T22:37:55.591788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T22:37:55.591794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-27T22:37:55.591802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-27T22:37:55.591809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-27T22:37:55.592775Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-908589 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T22:37:55.592792Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T22:37:55.592814Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T22:37:55.593048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T22:37:55.593072Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T22:37:55.595069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-27T22:37:55.595098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-10-27T22:38:12.392625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.489449ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789622911278499 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:571 > success:<request_put:<key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:648 >> failure:<request_range:<key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T22:38:12.39296Z","caller":"traceutil/trace.go:171","msg":"trace[2074332013] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"156.820858ms","start":"2025-10-27T22:38:12.236092Z","end":"2025-10-27T22:38:12.392913Z","steps":["trace[2074332013] 'process raft request'  (duration: 48.578294ms)","trace[2074332013] 'compare'  (duration: 107.390141ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:38:12.393265Z","caller":"traceutil/trace.go:171","msg":"trace[347201373] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"156.32995ms","start":"2025-10-27T22:38:12.236919Z","end":"2025-10-27T22:38:12.393249Z","steps":["trace[347201373] 'process raft request'  (duration: 155.821799ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:38:50 up  2:21,  0 user,  load average: 4.20, 2.68, 2.74
	Linux old-k8s-version-908589 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40b5ad6840c82eefedf9a6e76bbd8c07fa3d649ed396affb792017c3f80126e6] <==
	I1027 22:37:57.729608       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:37:57.729884       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1027 22:37:57.730090       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:37:57.730113       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:37:57.730140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:37:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:37:57.837657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:37:57.837690       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:37:57.837701       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:37:57.837883       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:37:58.230054       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:37:58.230601       1 metrics.go:72] Registering metrics
	I1027 22:37:58.230680       1 controller.go:711] "Syncing nftables rules"
	I1027 22:38:07.838546       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:07.838590       1 main.go:301] handling current node
	I1027 22:38:17.838715       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:17.838756       1 main.go:301] handling current node
	I1027 22:38:27.837607       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:27.837644       1 main.go:301] handling current node
	I1027 22:38:37.843025       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:37.843068       1 main.go:301] handling current node
	I1027 22:38:47.844395       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:38:47.844446       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e64b44ab53a02f28c14e5582dc7be12f197b4831f11356e8d5c51aa28e9eff8e] <==
	I1027 22:37:56.512813       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:37:56.515981       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 22:37:56.563338       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1027 22:37:56.563401       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 22:37:56.563435       1 aggregator.go:166] initial CRD sync complete...
	I1027 22:37:56.563445       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 22:37:56.563453       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:37:56.563462       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:37:56.563726       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:37:56.564642       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1027 22:37:56.564670       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1027 22:37:56.571662       1 shared_informer.go:318] Caches are synced for configmaps
	I1027 22:37:56.573805       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1027 22:37:57.364308       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 22:37:57.391005       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 22:37:57.405645       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:37:57.412212       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:37:57.417925       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 22:37:57.455093       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.184.86"}
	I1027 22:37:57.473159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:37:57.474510       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.98.230"}
	I1027 22:38:09.180429       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 22:38:09.532362       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:38:09.532359       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:38:09.631304       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6cdce94a5f78b08c7fa45e7720dfbf6930fe756536de03ceb5d36d0124ee1c23] <==
	I1027 22:38:09.437805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="368.344218ms"
	I1027 22:38:09.437987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.039µs"
	I1027 22:38:09.439463       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-n7dg8"
	I1027 22:38:09.439611       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-wbww8"
	I1027 22:38:09.444683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="260.558473ms"
	I1027 22:38:09.446762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="261.857655ms"
	I1027 22:38:09.451557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.685029ms"
	I1027 22:38:09.451645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.221µs"
	I1027 22:38:09.456183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.397848ms"
	I1027 22:38:09.456279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.895µs"
	I1027 22:38:09.458007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.536µs"
	I1027 22:38:09.465612       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.238µs"
	I1027 22:38:09.535717       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1027 22:38:09.656110       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 22:38:09.728504       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 22:38:09.728547       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 22:38:12.233546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.096µs"
	I1027 22:38:13.224340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.005µs"
	I1027 22:38:14.229109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.477µs"
	I1027 22:38:16.239866       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.608104ms"
	I1027 22:38:16.239982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="70.252µs"
	I1027 22:38:30.105646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.939922ms"
	I1027 22:38:30.105803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.614µs"
	I1027 22:38:31.271551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.652µs"
	I1027 22:38:39.760724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="113.006µs"
	
	
	==> kube-proxy [f4690cc69163d663fdab691358519ee0401aa190792f240348c12d39a643e5f5] <==
	I1027 22:37:57.537615       1 server_others.go:69] "Using iptables proxy"
	I1027 22:37:57.546576       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1027 22:37:57.566328       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:37:57.569027       1 server_others.go:152] "Using iptables Proxier"
	I1027 22:37:57.569055       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 22:37:57.569061       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 22:37:57.569095       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 22:37:57.569305       1 server.go:846] "Version info" version="v1.28.0"
	I1027 22:37:57.569322       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:37:57.570457       1 config.go:188] "Starting service config controller"
	I1027 22:37:57.570499       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 22:37:57.570534       1 config.go:315] "Starting node config controller"
	I1027 22:37:57.570542       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 22:37:57.570564       1 config.go:97] "Starting endpoint slice config controller"
	I1027 22:37:57.570588       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 22:37:57.671481       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 22:37:57.671609       1 shared_informer.go:318] Caches are synced for service config
	I1027 22:37:57.671606       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e61d7b54f2b00d9f3cc449906592240dfddbc082a601333546e64cbf3aab5c08] <==
	I1027 22:37:53.986671       1 serving.go:348] Generated self-signed cert in-memory
	W1027 22:37:56.499241       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:37:56.499277       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:37:56.499290       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:37:56.499299       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:37:56.525321       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1027 22:37:56.525351       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:37:56.528121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:37:56.528178       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 22:37:56.528534       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1027 22:37:56.528643       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1027 22:37:56.628422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.445904     728 topology_manager.go:215] "Topology Admit Handler" podUID="350e6819-9685-4f35-baab-0b7e8df8513a" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-n7dg8"
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.574685     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/350e6819-9685-4f35-baab-0b7e8df8513a-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-n7dg8\" (UID: \"350e6819-9685-4f35-baab-0b7e8df8513a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8"
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.574936     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx4c\" (UniqueName: \"kubernetes.io/projected/350e6819-9685-4f35-baab-0b7e8df8513a-kube-api-access-dfx4c\") pod \"kubernetes-dashboard-8694d4445c-n7dg8\" (UID: \"350e6819-9685-4f35-baab-0b7e8df8513a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8"
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.575069     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/02a43cb5-35de-48c4-a04f-de7368d3b206-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-wbww8\" (UID: \"02a43cb5-35de-48c4-a04f-de7368d3b206\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8"
	Oct 27 22:38:09 old-k8s-version-908589 kubelet[728]: I1027 22:38:09.575102     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xp2s\" (UniqueName: \"kubernetes.io/projected/02a43cb5-35de-48c4-a04f-de7368d3b206-kube-api-access-8xp2s\") pod \"dashboard-metrics-scraper-5f989dc9cf-wbww8\" (UID: \"02a43cb5-35de-48c4-a04f-de7368d3b206\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8"
	Oct 27 22:38:12 old-k8s-version-908589 kubelet[728]: I1027 22:38:12.208167     728 scope.go:117] "RemoveContainer" containerID="9d5701af5dda9039d363675238269d9c7e24efd0a054f94c4cf7e85901485224"
	Oct 27 22:38:13 old-k8s-version-908589 kubelet[728]: I1027 22:38:13.213030     728 scope.go:117] "RemoveContainer" containerID="9d5701af5dda9039d363675238269d9c7e24efd0a054f94c4cf7e85901485224"
	Oct 27 22:38:13 old-k8s-version-908589 kubelet[728]: I1027 22:38:13.213239     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:13 old-k8s-version-908589 kubelet[728]: E1027 22:38:13.213649     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:14 old-k8s-version-908589 kubelet[728]: I1027 22:38:14.217227     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:14 old-k8s-version-908589 kubelet[728]: E1027 22:38:14.217628     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:16 old-k8s-version-908589 kubelet[728]: I1027 22:38:16.234498     728 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-n7dg8" podStartSLOduration=0.933316773 podCreationTimestamp="2025-10-27 22:38:09 +0000 UTC" firstStartedPulling="2025-10-27 22:38:09.775772151 +0000 UTC m=+16.720768255" lastFinishedPulling="2025-10-27 22:38:16.076892136 +0000 UTC m=+23.021888252" observedRunningTime="2025-10-27 22:38:16.234042405 +0000 UTC m=+23.179038530" watchObservedRunningTime="2025-10-27 22:38:16.23443677 +0000 UTC m=+23.179432894"
	Oct 27 22:38:19 old-k8s-version-908589 kubelet[728]: I1027 22:38:19.747832     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:19 old-k8s-version-908589 kubelet[728]: E1027 22:38:19.748280     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:31 old-k8s-version-908589 kubelet[728]: I1027 22:38:31.144365     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:31 old-k8s-version-908589 kubelet[728]: I1027 22:38:31.260557     728 scope.go:117] "RemoveContainer" containerID="05027b2fe77999681360dd014e304eabe1dab9403616fadbdec90ab9931eca8b"
	Oct 27 22:38:31 old-k8s-version-908589 kubelet[728]: I1027 22:38:31.260875     728 scope.go:117] "RemoveContainer" containerID="f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc"
	Oct 27 22:38:31 old-k8s-version-908589 kubelet[728]: E1027 22:38:31.261288     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:39 old-k8s-version-908589 kubelet[728]: I1027 22:38:39.748316     728 scope.go:117] "RemoveContainer" containerID="f38488d38d5e21aed51bfa063933cbf997e8c2c9a470da5c0cb49b773d2ec2dc"
	Oct 27 22:38:39 old-k8s-version-908589 kubelet[728]: E1027 22:38:39.748748     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wbww8_kubernetes-dashboard(02a43cb5-35de-48c4-a04f-de7368d3b206)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wbww8" podUID="02a43cb5-35de-48c4-a04f-de7368d3b206"
	Oct 27 22:38:45 old-k8s-version-908589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:38:45 old-k8s-version-908589 kubelet[728]: I1027 22:38:45.200773     728 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 27 22:38:45 old-k8s-version-908589 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:38:45 old-k8s-version-908589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:38:45 old-k8s-version-908589 systemd[1]: kubelet.service: Consumed 1.478s CPU time.
	
	
	==> kubernetes-dashboard [3df1b82967aac1da231c97daab5e550b5b49a04740d35b9e3e12bc990a982e8c] <==
	2025/10/27 22:38:16 Starting overwatch
	2025/10/27 22:38:16 Using namespace: kubernetes-dashboard
	2025/10/27 22:38:16 Using in-cluster config to connect to apiserver
	2025/10/27 22:38:16 Using secret token for csrf signing
	2025/10/27 22:38:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:38:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:38:16 Successful initial request to the apiserver, version: v1.28.0
	2025/10/27 22:38:16 Generating JWE encryption key
	2025/10/27 22:38:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:38:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:38:16 Initializing JWE encryption key from synchronized object
	2025/10/27 22:38:16 Creating in-cluster Sidecar client
	2025/10/27 22:38:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:38:16 Serving insecurely on HTTP port: 9090
	2025/10/27 22:38:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a5bda7727c540811b4409b8ecc67d9d385823f5aa5de84580883039c1baf1935] <==
	I1027 22:37:58.221975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 22:37:58.231637       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 22:37:58.231681       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 22:38:15.627689       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 22:38:15.627839       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908589_9e22608c-affc-4f7b-8268-ee3ea6c992f9!
	I1027 22:38:15.627806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0abcea3-4af1-407b-918c-156849108be7", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-908589_9e22608c-affc-4f7b-8268-ee3ea6c992f9 became leader
	I1027 22:38:15.728104       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-908589_9e22608c-affc-4f7b-8268-ee3ea6c992f9!
	
	
	==> storage-provisioner [ec59b02f91c0b8777c448403a25b84492d518f669cf7e6d1d62914de1ae6d861] <==
	I1027 22:37:57.517049       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:37:57.518989       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908589 -n old-k8s-version-908589
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908589 -n old-k8s-version-908589: exit status 2 (351.576528ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-908589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.61s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.47s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-829976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-829976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (285.571532ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-829976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
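The exit-11 failures in this group all trace back to the same probe: before enabling an addon, minikube checks for paused containers by shelling out to `sudo runc list -f json` (see the MK_ADDON_ENABLE_PAUSED error above), and here runc aborts because its state root /run/runc does not exist on the crio node. Below is a minimal Go sketch of such a probe that tolerates a missing state directory instead of failing hard; the helper name listPaused and the treat-missing-root-as-empty behavior are illustrative assumptions, not minikube's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer mirrors the fields of `runc list -f json` output that the
// probe needs: the container ID and its lifecycle status.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused runs the same command the failing check uses and returns the
// IDs of paused containers. A missing state root (the "open /run/runc: no
// such file or directory" error seen above) is interpreted as "no
// containers exist" rather than a hard failure.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok &&
			strings.Contains(string(ee.Stderr), "no such file or directory") {
			return nil, nil // state dir absent: nothing running, nothing paused
		}
		return nil, err
	}
	var containers []runcContainer
	// runc prints "null" for an empty list; Unmarshal then leaves the
	// slice nil, which is exactly what we want.
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

The design point is the error classification: runc plausibly creates its state root only once a container has been created under it, so distinguishing "root not yet created" from a genuine runtime error is what separates a clean no-op from the exit-status-1 seen in these tests.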
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-829976 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-829976 describe deploy/metrics-server -n kube-system: exit status 1 (80.787822ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-829976 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
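For reference, the expectation being asserted: the enable command should rewrite the metrics-server image to the fake registry, and the test then reads the Deployment back to confirm it. A minimal Go sketch of that verification, assuming a jsonpath query rather than the `kubectl describe` parsing the test actually performs (the context name is taken from the failing run above):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Read the image of the metrics-server Deployment's first container.
	out, err := exec.Command("kubectl", "--context", "embed-certs-829976",
		"get", "deploy/metrics-server", "-n", "kube-system",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		// A NotFound error here matches the failure above: the Deployment
		// was never created because the enable step itself failed.
		log.Fatalf("deployment not readable: %v", err)
	}
	if !strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
		log.Fatalf("addon did not load correct image: got %q", out)
	}
	log.Printf("image OK: %s", out)
}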
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-829976
helpers_test.go:243: (dbg) docker inspect embed-certs-829976:

-- stdout --
	[
	    {
	        "Id": "faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4",
	        "Created": "2025-10-27T22:38:24.135878096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 725581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:38:24.167801476Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/hosts",
	        "LogPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4-json.log",
	        "Name": "/embed-certs-829976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-829976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-829976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4",
	                "LowerDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-829976",
	                "Source": "/var/lib/docker/volumes/embed-certs-829976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-829976",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-829976",
	                "name.minikube.sigs.k8s.io": "embed-certs-829976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb86d028a78520beb08de3386cd806e17e360d002f7b296ddd191e56c0e46584",
	            "SandboxKey": "/var/run/docker/netns/bb86d028a785",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-829976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:a4:09:a2:25:bd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19326983879b440afd91ddad1f1a29b86b26ac185f059a173d6110952f20d348",
	                    "EndpointID": "5311512cdcee7bfa385980cc1e6099815af3a9f023e5dc3ff589d48b114ff50a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-829976",
	                        "faeaf04da269"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829976 -n embed-certs-829976
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-829976 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-829976 logs -n 25: (1.251920771s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-565903          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │                     │
	│ stop    │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-565903 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-565903          │ jenkins │ v1.37.0 │ 27 Oct 25 22:36 UTC │ 27 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-565903 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-565903          │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903          │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ stop    │ -p old-k8s-version-908589 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-908589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ start   │ -p cert-expiration-219241 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-219241       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ stop    │ -p no-preload-188814 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p cert-expiration-219241                                                                                                                                                                                                                     │ cert-expiration-219241       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ addons  │ enable dashboard -p no-preload-188814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ image   │ old-k8s-version-908589 image list --format=json                                                                                                                                                                                               │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ pause   │ -p old-k8s-version-908589 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p disable-driver-mounts-617659                                                                                                                                                                                                               │ disable-driver-mounts-617659 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-829976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:38:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:38:55.090154  734045 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:38:55.090274  734045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:55.090280  734045 out.go:374] Setting ErrFile to fd 2...
	I1027 22:38:55.090286  734045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:38:55.090631  734045 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:38:55.091183  734045 out.go:368] Setting JSON to false
	I1027 22:38:55.092526  734045 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8474,"bootTime":1761596261,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:38:55.092624  734045 start.go:143] virtualization: kvm guest
	I1027 22:38:55.094465  734045 out.go:179] * [default-k8s-diff-port-927034] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:38:55.095628  734045 notify.go:221] Checking for updates...
	I1027 22:38:55.095667  734045 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:38:55.096819  734045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:38:55.097916  734045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:38:55.098958  734045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:38:55.099953  734045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:38:55.100897  734045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:38:55.102175  734045 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:55.102264  734045 config.go:182] Loaded profile config "kubernetes-upgrade-695499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:55.102361  734045 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:38:55.102497  734045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:38:55.126321  734045 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:38:55.126467  734045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:38:55.185124  734045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:38:55.175569217 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:38:55.185271  734045 docker.go:318] overlay module found
	I1027 22:38:55.186811  734045 out.go:179] * Using the docker driver based on user configuration
	I1027 22:38:55.187757  734045 start.go:307] selected driver: docker
	I1027 22:38:55.187773  734045 start.go:928] validating driver "docker" against <nil>
	I1027 22:38:55.187799  734045 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:38:55.188439  734045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:38:55.245633  734045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:38:55.235666211 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:38:55.245822  734045 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:38:55.246049  734045 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:38:55.247463  734045 out.go:179] * Using Docker driver with root privileges
	I1027 22:38:55.248434  734045 cni.go:84] Creating CNI manager for ""
	I1027 22:38:55.248494  734045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:38:55.248504  734045 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:38:55.248571  734045 start.go:351] cluster config:
	{Name:default-k8s-diff-port-927034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:38:55.249694  734045 out.go:179] * Starting "default-k8s-diff-port-927034" primary control-plane node in "default-k8s-diff-port-927034" cluster
	I1027 22:38:55.250698  734045 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:38:55.251655  734045 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:38:55.252596  734045 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:38:55.252662  734045 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:38:55.252677  734045 cache.go:59] Caching tarball of preloaded images
	I1027 22:38:55.252684  734045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:38:55.252778  734045 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:38:55.252791  734045 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
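
The preload step above is a plain existence check against the local cache before any download is attempted. A minimal sketch of that check (path taken from the log; adjust for your own minikube home):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path copied from the log above; substitute your own cache directory.
        tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
            "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
        if _, err := os.Stat(tarball); err == nil {
            fmt.Println("Found local preload, skipping download:", tarball)
        } else {
            fmt.Println("No local preload, would download:", err)
        }
    }
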
	I1027 22:38:55.252893  734045 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/config.json ...
	I1027 22:38:55.252918  734045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/config.json: {Name:mk38787b225d84827c8e4d9c4fabc151d93dd4a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:38:55.272842  734045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:38:55.272860  734045 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:38:55.272877  734045 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:38:55.272913  734045 start.go:360] acquireMachinesLock for default-k8s-diff-port-927034: {Name:mkc19f9b640dd473134a0011bc6c373e550fd190 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:38:55.273042  734045 start.go:364] duration metric: took 106.382µs to acquireMachinesLock for "default-k8s-diff-port-927034"
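
acquireMachinesLock reports a 500ms Delay and a 10m Timeout. A generic sketch of that acquire-with-retry pattern (an exclusive lock file polled until the deadline; this stands in for, and is not, minikube's actual lock implementation):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file until timeout elapses.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; machine provisioning would proceed here")
    }
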
	I1027 22:38:55.273076  734045 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-927034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:38:55.273154  734045 start.go:125] createHost starting for "" (driver="docker")
	W1027 22:38:54.241681  724915 node_ready.go:57] node "embed-certs-829976" has "Ready":"False" status (will retry)
	W1027 22:38:56.741256  724915 node_ready.go:57] node "embed-certs-829976" has "Ready":"False" status (will retry)
	I1027 22:38:58.925158  724915 node_ready.go:49] node "embed-certs-829976" is "Ready"
	I1027 22:38:58.925207  724915 node_ready.go:38] duration metric: took 11.187450346s for node "embed-certs-829976" to be "Ready" ...
	I1027 22:38:58.925228  724915 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:38:58.925292  724915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:38:58.946177  724915 api_server.go:72] duration metric: took 11.667024762s to wait for apiserver process to appear ...
	I1027 22:38:58.946207  724915 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:38:58.946467  724915 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 22:38:58.999982  724915 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 22:38:59.001345  724915 api_server.go:141] control plane version: v1.34.1
	I1027 22:38:59.001382  724915 api_server.go:131] duration metric: took 55.165736ms to wait for apiserver health ...
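
The healthz probe shown above is a plain HTTPS GET that treats a 200 with body "ok" as healthy. A self-contained sketch of the same check (the InsecureSkipVerify transport is an assumption made here so the snippet runs against the cluster's self-signed certificate; it is not necessarily how minikube's client is configured):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: skip verification for the apiserver's self-signed cert.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
    }
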
	I1027 22:38:59.001394  724915 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:38:59.004593  724915 system_pods.go:59] 8 kube-system pods found
	I1027 22:38:59.004637  724915 system_pods.go:61] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Pending
	I1027 22:38:59.004645  724915 system_pods.go:61] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running
	I1027 22:38:59.004657  724915 system_pods.go:61] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:38:59.004661  724915 system_pods.go:61] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running
	I1027 22:38:59.004665  724915 system_pods.go:61] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running
	I1027 22:38:59.004668  724915 system_pods.go:61] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:38:59.004672  724915 system_pods.go:61] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running
	I1027 22:38:59.004675  724915 system_pods.go:61] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Pending
	I1027 22:38:59.004681  724915 system_pods.go:74] duration metric: took 3.281145ms to wait for pod list to return data ...
	I1027 22:38:59.004692  724915 default_sa.go:34] waiting for default service account to be created ...
	W1027 22:38:55.048367  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	W1027 22:38:57.548241  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	I1027 22:38:54.283432  682462 cri.go:89] found id: ""
	I1027 22:38:54.283460  682462 logs.go:282] 0 containers: []
	W1027 22:38:54.283478  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:54.283492  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:54.283505  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:54.302514  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:54.302537  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:54.358676  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:54.358695  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:54.358709  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:54.393422  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:54.393451  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:54.459905  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:54.459953  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:54.489029  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:54.489067  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:54.549263  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:54.549297  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:54.580514  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:54.580542  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:57.187018  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:38:57.187488  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:38:57.187549  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:38:57.187600  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:38:57.218418  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:57.218446  682462 cri.go:89] found id: ""
	I1027 22:38:57.218457  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:38:57.218521  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:57.222492  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:38:57.222556  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:38:57.251353  682462 cri.go:89] found id: ""
	I1027 22:38:57.251385  682462 logs.go:282] 0 containers: []
	W1027 22:38:57.251396  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:38:57.251406  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:38:57.251463  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:38:57.280837  682462 cri.go:89] found id: ""
	I1027 22:38:57.280869  682462 logs.go:282] 0 containers: []
	W1027 22:38:57.280880  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:38:57.280889  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:38:57.280963  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:38:57.314439  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:57.314467  682462 cri.go:89] found id: ""
	I1027 22:38:57.314479  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:38:57.314541  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:57.318694  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:38:57.318753  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:38:57.347110  682462 cri.go:89] found id: ""
	I1027 22:38:57.347141  682462 logs.go:282] 0 containers: []
	W1027 22:38:57.347152  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:38:57.347159  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:38:57.347220  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:38:57.374903  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:57.374932  682462 cri.go:89] found id: ""
	I1027 22:38:57.374974  682462 logs.go:282] 1 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387]
	I1027 22:38:57.375032  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:38:57.379034  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:38:57.379096  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:38:57.411571  682462 cri.go:89] found id: ""
	I1027 22:38:57.411595  682462 logs.go:282] 0 containers: []
	W1027 22:38:57.411606  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:38:57.411613  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:38:57.411670  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:38:57.440706  682462 cri.go:89] found id: ""
	I1027 22:38:57.440739  682462 logs.go:282] 0 containers: []
	W1027 22:38:57.440750  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:38:57.440763  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:38:57.440780  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:38:57.460105  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:38:57.460135  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:38:57.521773  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:38:57.521801  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:38:57.521822  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:38:57.556802  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:38:57.556838  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:38:57.618717  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:38:57.618750  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:38:57.647810  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:38:57.647836  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:38:57.715716  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:38:57.715753  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:38:57.750152  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:38:57.750180  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:38:55.274696  734045 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:38:55.274894  734045 start.go:159] libmachine.API.Create for "default-k8s-diff-port-927034" (driver="docker")
	I1027 22:38:55.274965  734045 client.go:173] LocalClient.Create starting
	I1027 22:38:55.275036  734045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:38:55.275068  734045 main.go:143] libmachine: Decoding PEM data...
	I1027 22:38:55.275083  734045 main.go:143] libmachine: Parsing certificate...
	I1027 22:38:55.275155  734045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:38:55.275183  734045 main.go:143] libmachine: Decoding PEM data...
	I1027 22:38:55.275194  734045 main.go:143] libmachine: Parsing certificate...
	I1027 22:38:55.275543  734045 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-927034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:38:55.291490  734045 cli_runner.go:211] docker network inspect default-k8s-diff-port-927034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:38:55.291562  734045 network_create.go:284] running [docker network inspect default-k8s-diff-port-927034] to gather additional debugging logs...
	I1027 22:38:55.291578  734045 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-927034
	W1027 22:38:55.307657  734045 cli_runner.go:211] docker network inspect default-k8s-diff-port-927034 returned with exit code 1
	I1027 22:38:55.307697  734045 network_create.go:287] error running [docker network inspect default-k8s-diff-port-927034]: docker network inspect default-k8s-diff-port-927034: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-927034 not found
	I1027 22:38:55.307713  734045 network_create.go:289] output of [docker network inspect default-k8s-diff-port-927034]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-927034 not found
	
	** /stderr **
	I1027 22:38:55.307864  734045 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:38:55.325353  734045 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:38:55.326306  734045 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:38:55.326874  734045 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:38:55.327556  734045 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a2ac9625014b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:fb:26:35:6f:70} reservation:<nil>}
	I1027 22:38:55.328537  734045 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-19326983879b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:fd:92:c2:f9:aa} reservation:<nil>}
	I1027 22:38:55.329293  734045 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ae03ff1f23a6 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ee:9c:69:53:fa:67} reservation:<nil>}
	I1027 22:38:55.330176  734045 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f3c230}
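
The subnet walk above steps through candidate /24s in increments of 9 in the third octet (49, 58, 67, 76, 85, 94, 103), skipping each one already claimed by an existing docker bridge, and stops at the first free block. A simplified sketch of that scan, with a hard-coded map standing in for the live `docker network inspect` results:

    package main

    import "fmt"

    func main() {
        // Stand-in for the docker network inspections seen in the log.
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true, "192.168.94.0/24": true,
        }
        // Candidates advance by 9 in the third octet, matching the log.
        for octet := 49; octet < 256; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken[subnet] {
                fmt.Println("skipping subnet", subnet, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", subnet)
            break
        }
    }
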
	I1027 22:38:55.330211  734045 network_create.go:124] attempt to create docker network default-k8s-diff-port-927034 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1027 22:38:55.330257  734045 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-927034 default-k8s-diff-port-927034
	I1027 22:38:55.391903  734045 network_create.go:108] docker network default-k8s-diff-port-927034 192.168.103.0/24 created
	I1027 22:38:55.391934  734045 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-927034" container
	I1027 22:38:55.392040  734045 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:38:55.409276  734045 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-927034 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-927034 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:38:55.426669  734045 oci.go:103] Successfully created a docker volume default-k8s-diff-port-927034
	I1027 22:38:55.426764  734045 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-927034-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-927034 --entrypoint /usr/bin/test -v default-k8s-diff-port-927034:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:38:55.793974  734045 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-927034
	I1027 22:38:55.794040  734045 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:38:55.794070  734045 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:38:55.794163  734045 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-927034:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
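
The extraction step bind-mounts the preload tarball and the named volume into a throwaway kicbase container and untars with lz4. An equivalent invocation driven from Go via os/exec (the path, volume name, and image reference are placeholders to be replaced with the values from your own run):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Placeholders: substitute the tarball path, volume name, and kicbase
        // image reference from the log lines above.
        tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
        volume := "default-k8s-diff-port-927034"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"

        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
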
	I1027 22:38:59.186507  724915 default_sa.go:45] found service account: "default"
	I1027 22:38:59.186537  724915 default_sa.go:55] duration metric: took 181.837105ms for default service account to be created ...
	I1027 22:38:59.186549  724915 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:38:59.190146  724915 system_pods.go:86] 8 kube-system pods found
	I1027 22:38:59.190179  724915 system_pods.go:89] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Pending
	I1027 22:38:59.190187  724915 system_pods.go:89] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running
	I1027 22:38:59.190200  724915 system_pods.go:89] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:38:59.190205  724915 system_pods.go:89] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running
	I1027 22:38:59.190211  724915 system_pods.go:89] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running
	I1027 22:38:59.190216  724915 system_pods.go:89] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:38:59.190220  724915 system_pods.go:89] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running
	I1027 22:38:59.190226  724915 system_pods.go:89] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Pending
	I1027 22:38:59.190264  724915 retry.go:31] will retry after 242.179327ms: missing components: kube-dns
	I1027 22:38:59.531468  724915 system_pods.go:86] 8 kube-system pods found
	I1027 22:38:59.531527  724915 system_pods.go:89] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:38:59.531539  724915 system_pods.go:89] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running
	I1027 22:38:59.531551  724915 system_pods.go:89] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:38:59.531556  724915 system_pods.go:89] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running
	I1027 22:38:59.531565  724915 system_pods.go:89] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running
	I1027 22:38:59.531569  724915 system_pods.go:89] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:38:59.531574  724915 system_pods.go:89] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running
	I1027 22:38:59.531581  724915 system_pods.go:89] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:38:59.531607  724915 retry.go:31] will retry after 310.366002ms: missing components: kube-dns
	I1027 22:38:59.902398  724915 system_pods.go:86] 8 kube-system pods found
	I1027 22:38:59.902438  724915 system_pods.go:89] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:38:59.902462  724915 system_pods.go:89] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running
	I1027 22:38:59.902472  724915 system_pods.go:89] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:38:59.902486  724915 system_pods.go:89] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running
	I1027 22:38:59.902493  724915 system_pods.go:89] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running
	I1027 22:38:59.902501  724915 system_pods.go:89] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:38:59.902507  724915 system_pods.go:89] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running
	I1027 22:38:59.902512  724915 system_pods.go:89] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:38:59.902538  724915 retry.go:31] will retry after 411.458801ms: missing components: kube-dns
	I1027 22:39:00.319393  724915 system_pods.go:86] 8 kube-system pods found
	I1027 22:39:00.319435  724915 system_pods.go:89] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:39:00.319445  724915 system_pods.go:89] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running
	I1027 22:39:00.319453  724915 system_pods.go:89] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:39:00.319470  724915 system_pods.go:89] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running
	I1027 22:39:00.319477  724915 system_pods.go:89] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running
	I1027 22:39:00.319482  724915 system_pods.go:89] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:39:00.319494  724915 system_pods.go:89] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running
	I1027 22:39:00.319501  724915 system_pods.go:89] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:39:00.319522  724915 retry.go:31] will retry after 504.686393ms: missing components: kube-dns
	I1027 22:39:00.831512  724915 system_pods.go:86] 8 kube-system pods found
	I1027 22:39:00.831549  724915 system_pods.go:89] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Running
	I1027 22:39:00.831557  724915 system_pods.go:89] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running
	I1027 22:39:00.831562  724915 system_pods.go:89] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:39:00.831568  724915 system_pods.go:89] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running
	I1027 22:39:00.831575  724915 system_pods.go:89] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running
	I1027 22:39:00.831579  724915 system_pods.go:89] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:39:00.831584  724915 system_pods.go:89] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running
	I1027 22:39:00.831589  724915 system_pods.go:89] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Running
	I1027 22:39:00.831599  724915 system_pods.go:126] duration metric: took 1.645042481s to wait for k8s-apps to be running ...
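
The retry cadence visible above (242ms, 310ms, 411ms, 504ms) is a growing, jittered backoff around a readiness predicate. A generic sketch of that loop (growth factor, jitter, and the waitRunning helper are illustrative, not minikube's retry.go):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitRunning polls check() with a mildly growing, jittered delay,
    // echoing the retry.go lines in the log above.
    func waitRunning(check func() (missing []string), timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for {
            missing := check()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out; missing components: %v", missing)
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: missing components: %v\n", jittered, missing)
            time.Sleep(jittered)
            delay = delay * 5 / 4 // grow roughly 25% per round
        }
    }

    func main() {
        tries := 0
        err := waitRunning(func() []string {
            tries++
            if tries < 4 {
                return []string{"kube-dns"} // simulate coredns still Pending
            }
            return nil
        }, time.Minute)
        fmt.Println("done:", err)
    }
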
	I1027 22:39:00.831619  724915 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:39:00.831669  724915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:39:00.845478  724915 system_svc.go:56] duration metric: took 13.851514ms WaitForService to wait for kubelet
	I1027 22:39:00.845509  724915 kubeadm.go:587] duration metric: took 13.566362433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:39:00.845536  724915 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:39:00.848643  724915 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:39:00.848672  724915 node_conditions.go:123] node cpu capacity is 8
	I1027 22:39:00.848691  724915 node_conditions.go:105] duration metric: took 3.14975ms to run NodePressure ...
	I1027 22:39:00.848706  724915 start.go:242] waiting for startup goroutines ...
	I1027 22:39:00.848719  724915 start.go:247] waiting for cluster config update ...
	I1027 22:39:00.848735  724915 start.go:256] writing updated cluster config ...
	I1027 22:39:00.849074  724915 ssh_runner.go:195] Run: rm -f paused
	I1027 22:39:00.853344  724915 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:39:00.856865  724915 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-msbj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:00.861697  724915 pod_ready.go:94] pod "coredns-66bc5c9577-msbj9" is "Ready"
	I1027 22:39:00.861718  724915 pod_ready.go:86] duration metric: took 4.828261ms for pod "coredns-66bc5c9577-msbj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:00.864039  724915 pod_ready.go:83] waiting for pod "etcd-embed-certs-829976" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:00.868503  724915 pod_ready.go:94] pod "etcd-embed-certs-829976" is "Ready"
	I1027 22:39:00.868525  724915 pod_ready.go:86] duration metric: took 4.454792ms for pod "etcd-embed-certs-829976" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:00.870801  724915 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-829976" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:00.875039  724915 pod_ready.go:94] pod "kube-apiserver-embed-certs-829976" is "Ready"
	I1027 22:39:00.875059  724915 pod_ready.go:86] duration metric: took 4.233852ms for pod "kube-apiserver-embed-certs-829976" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:00.877290  724915 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-829976" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:01.258357  724915 pod_ready.go:94] pod "kube-controller-manager-embed-certs-829976" is "Ready"
	I1027 22:39:01.258387  724915 pod_ready.go:86] duration metric: took 381.076426ms for pod "kube-controller-manager-embed-certs-829976" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:01.458228  724915 pod_ready.go:83] waiting for pod "kube-proxy-gf725" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:01.857606  724915 pod_ready.go:94] pod "kube-proxy-gf725" is "Ready"
	I1027 22:39:01.857633  724915 pod_ready.go:86] duration metric: took 399.373677ms for pod "kube-proxy-gf725" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:02.058129  724915 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-829976" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:02.457001  724915 pod_ready.go:94] pod "kube-scheduler-embed-certs-829976" is "Ready"
	I1027 22:39:02.457029  724915 pod_ready.go:86] duration metric: took 398.874743ms for pod "kube-scheduler-embed-certs-829976" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:02.457055  724915 pod_ready.go:40] duration metric: took 1.603678179s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:39:02.502244  724915 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:39:02.503998  724915 out.go:179] * Done! kubectl is now configured to use "embed-certs-829976" cluster and "default" namespace by default
	I1027 22:39:00.289809  734045 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-927034:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.495573785s)
	I1027 22:39:00.289841  734045 kic.go:203] duration metric: took 4.495767596s to extract preloaded images to volume ...
	W1027 22:39:00.289940  734045 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:39:00.290012  734045 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:39:00.290060  734045 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:39:00.347417  734045 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-927034 --name default-k8s-diff-port-927034 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-927034 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-927034 --network default-k8s-diff-port-927034 --ip 192.168.103.2 --volume default-k8s-diff-port-927034:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:39:00.636082  734045 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927034 --format={{.State.Running}}
	I1027 22:39:00.656342  734045 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927034 --format={{.State.Status}}
	I1027 22:39:00.678405  734045 cli_runner.go:164] Run: docker exec default-k8s-diff-port-927034 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:39:00.738772  734045 oci.go:144] the created container "default-k8s-diff-port-927034" has a running status.
	I1027 22:39:00.738815  734045 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/default-k8s-diff-port-927034/id_rsa...
	I1027 22:39:01.116642  734045 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/default-k8s-diff-port-927034/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:39:01.142046  734045 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927034 --format={{.State.Status}}
	I1027 22:39:01.161219  734045 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:39:01.161251  734045 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-927034 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:39:01.204166  734045 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927034 --format={{.State.Status}}
	I1027 22:39:01.222320  734045 machine.go:94] provisionDockerMachine start ...
	I1027 22:39:01.222448  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:01.241636  734045 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:01.241907  734045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1027 22:39:01.241930  734045 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:39:01.386935  734045 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-927034
	
	I1027 22:39:01.386992  734045 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-927034"
	I1027 22:39:01.387064  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:01.406678  734045 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:01.406896  734045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1027 22:39:01.406920  734045 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-927034 && echo "default-k8s-diff-port-927034" | sudo tee /etc/hostname
	I1027 22:39:01.563782  734045 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-927034
	
	I1027 22:39:01.563893  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:01.583047  734045 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:01.583348  734045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1027 22:39:01.583370  734045 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-927034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-927034/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-927034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:39:01.726176  734045 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:39:01.726219  734045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:39:01.726244  734045 ubuntu.go:190] setting up certificates
	I1027 22:39:01.726264  734045 provision.go:84] configureAuth start
	I1027 22:39:01.726338  734045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927034
	I1027 22:39:01.744612  734045 provision.go:143] copyHostCerts
	I1027 22:39:01.744684  734045 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:39:01.744702  734045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:39:01.744789  734045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:39:01.744924  734045 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:39:01.744938  734045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:39:01.744991  734045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:39:01.745090  734045 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:39:01.745102  734045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:39:01.745140  734045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:39:01.745230  734045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-927034 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-927034 localhost minikube]
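
provision.go reports generating a server certificate signed by the local CA with the SANs [127.0.0.1 192.168.103.2 default-k8s-diff-port-927034 localhost minikube]. A compact crypto/x509 sketch of issuing such a cert; to stay self-contained it creates the CA in memory rather than loading ca.pem/ca-key.pem as minikube does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // In-memory CA; the real flow loads ca.pem/ca-key.pem from disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs listed in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-927034"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            DNSNames:     []string{"default-k8s-diff-port-927034", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued server cert, %d DER bytes\n", len(der))
    }
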
	I1027 22:39:01.958738  734045 provision.go:177] copyRemoteCerts
	I1027 22:39:01.958806  734045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:39:01.958855  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:01.976997  734045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/default-k8s-diff-port-927034/id_rsa Username:docker}
	I1027 22:39:02.079453  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:39:02.098673  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1027 22:39:02.116585  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:39:02.135045  734045 provision.go:87] duration metric: took 408.761519ms to configureAuth
	I1027 22:39:02.135077  734045 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:39:02.135276  734045 config.go:182] Loaded profile config "default-k8s-diff-port-927034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:02.135405  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:02.152859  734045 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:02.153118  734045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1027 22:39:02.153141  734045 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:39:02.408016  734045 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:39:02.408050  734045 machine.go:97] duration metric: took 1.185701959s to provisionDockerMachine
	I1027 22:39:02.408061  734045 client.go:176] duration metric: took 7.133086613s to LocalClient.Create
	I1027 22:39:02.408083  734045 start.go:167] duration metric: took 7.133189991s to libmachine.API.Create "default-k8s-diff-port-927034"
	I1027 22:39:02.408093  734045 start.go:293] postStartSetup for "default-k8s-diff-port-927034" (driver="docker")
	I1027 22:39:02.408102  734045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:39:02.408164  734045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:39:02.408213  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:02.426972  734045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/default-k8s-diff-port-927034/id_rsa Username:docker}
	I1027 22:39:02.532609  734045 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:39:02.536119  734045 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:39:02.536154  734045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:39:02.536167  734045 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:39:02.536247  734045 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:39:02.536339  734045 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:39:02.536447  734045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:39:02.544205  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:02.565096  734045 start.go:296] duration metric: took 156.987696ms for postStartSetup
	I1027 22:39:02.565411  734045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927034
	I1027 22:39:02.584907  734045 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/config.json ...
	I1027 22:39:02.585198  734045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:39:02.585257  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:02.603173  734045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/default-k8s-diff-port-927034/id_rsa Username:docker}
	I1027 22:39:02.704387  734045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:39:02.709161  734045 start.go:128] duration metric: took 7.435970975s to createHost
	I1027 22:39:02.709190  734045 start.go:83] releasing machines lock for "default-k8s-diff-port-927034", held for 7.43613074s
	I1027 22:39:02.709273  734045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927034
	I1027 22:39:02.726722  734045 ssh_runner.go:195] Run: cat /version.json
	I1027 22:39:02.726769  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:02.726804  734045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:39:02.726879  734045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:39:02.744474  734045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/default-k8s-diff-port-927034/id_rsa Username:docker}
	I1027 22:39:02.745488  734045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/default-k8s-diff-port-927034/id_rsa Username:docker}
	I1027 22:39:02.916010  734045 ssh_runner.go:195] Run: systemctl --version
	I1027 22:39:02.923567  734045 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:39:02.960851  734045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:39:02.966398  734045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:39:02.966490  734045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:39:02.996611  734045 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 22:39:02.996636  734045 start.go:496] detecting cgroup driver to use...
	I1027 22:39:02.996667  734045 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:39:02.996708  734045 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:39:03.013703  734045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:39:03.026481  734045 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:39:03.026536  734045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:39:03.043454  734045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:39:03.061893  734045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:39:03.148316  734045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:39:03.237327  734045 docker.go:234] disabling docker service ...
	I1027 22:39:03.237388  734045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:39:03.256110  734045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:39:03.268889  734045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:39:03.352280  734045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:39:03.438706  734045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:39:03.453176  734045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:39:03.470594  734045 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:39:03.470650  734045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:03.482419  734045 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:39:03.482498  734045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:03.492862  734045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:03.502928  734045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:03.513788  734045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:39:03.523188  734045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:03.533475  734045 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:03.549571  734045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:03.560406  734045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:39:03.568852  734045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:39:03.577484  734045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:03.668610  734045 ssh_runner.go:195] Run: sudo systemctl restart crio
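The CRI-O reconfiguration above reduces to a few sed edits on the same drop-in file followed by a restart; condensed from the commands logged (same file and keys, collapsed for readability):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio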
	I1027 22:39:03.789344  734045 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:39:03.789436  734045 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:39:03.794339  734045 start.go:564] Will wait 60s for crictl version
	I1027 22:39:03.794408  734045 ssh_runner.go:195] Run: which crictl
	I1027 22:39:03.798743  734045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:39:03.827659  734045 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:39:03.827749  734045 ssh_runner.go:195] Run: crio --version
	I1027 22:39:03.863571  734045 ssh_runner.go:195] Run: crio --version
	I1027 22:39:03.896774  734045 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 22:38:59.555629  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	W1027 22:39:02.048541  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	W1027 22:39:04.048615  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
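The pod_ready warnings above come from minikube polling the pod's Ready condition. The same check can be reproduced by hand; a sketch (the kube-dns label selector is the stock CoreDNS one, assumed rather than taken from this log):

	kubectl -n kube-system get pods -l k8s-app=kube-dns \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'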
	I1027 22:39:00.342614  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:39:00.343092  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:39:00.343146  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:39:00.343195  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:39:00.375711  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:39:00.375736  682462 cri.go:89] found id: ""
	I1027 22:39:00.375747  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:39:00.375806  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:00.380467  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:39:00.380542  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:39:00.414408  682462 cri.go:89] found id: ""
	I1027 22:39:00.414440  682462 logs.go:282] 0 containers: []
	W1027 22:39:00.414454  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:39:00.414461  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:39:00.414524  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:39:00.445700  682462 cri.go:89] found id: ""
	I1027 22:39:00.445726  682462 logs.go:282] 0 containers: []
	W1027 22:39:00.445737  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:39:00.445745  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:39:00.445807  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:39:00.481522  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:39:00.481554  682462 cri.go:89] found id: ""
	I1027 22:39:00.481566  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:39:00.481633  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:00.486737  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:39:00.486812  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:39:00.518179  682462 cri.go:89] found id: ""
	I1027 22:39:00.518219  682462 logs.go:282] 0 containers: []
	W1027 22:39:00.518231  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:39:00.518239  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:39:00.518310  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:39:00.550441  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:39:00.550464  682462 cri.go:89] found id: ""
	I1027 22:39:00.550475  682462 logs.go:282] 1 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387]
	I1027 22:39:00.550636  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:00.555263  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:39:00.555344  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:39:00.586736  682462 cri.go:89] found id: ""
	I1027 22:39:00.586766  682462 logs.go:282] 0 containers: []
	W1027 22:39:00.586779  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:39:00.586787  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:39:00.586848  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:39:00.617722  682462 cri.go:89] found id: ""
	I1027 22:39:00.617753  682462 logs.go:282] 0 containers: []
	W1027 22:39:00.617766  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:39:00.617779  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:39:00.617796  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:39:00.690025  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:39:00.690070  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:39:00.729658  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:39:00.729688  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:39:00.848017  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:39:00.848050  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:39:00.871362  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:39:00.871399  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:39:00.942055  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:39:00.942083  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:39:00.942101  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:39:00.985051  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:39:00.985093  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:39:01.064157  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:39:01.064193  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
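Each failed healthz probe triggers the same diagnostics sweep; run by hand on the node, the commands logged above amount to (the container ID placeholder is illustrative):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a
	sudo crictl logs --tail 400 <container-id>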
	I1027 22:39:03.604032  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:39:03.604564  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:39:03.604677  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:39:03.604768  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:39:03.639633  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:39:03.639664  682462 cri.go:89] found id: ""
	I1027 22:39:03.639676  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:39:03.639737  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:03.645089  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:39:03.645171  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:39:03.675496  682462 cri.go:89] found id: ""
	I1027 22:39:03.675528  682462 logs.go:282] 0 containers: []
	W1027 22:39:03.675541  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:39:03.675549  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:39:03.675607  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:39:03.706836  682462 cri.go:89] found id: ""
	I1027 22:39:03.706861  682462 logs.go:282] 0 containers: []
	W1027 22:39:03.706869  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:39:03.706875  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:39:03.706926  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:39:03.740418  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:39:03.740441  682462 cri.go:89] found id: ""
	I1027 22:39:03.740456  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:39:03.740521  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:03.744736  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:39:03.744794  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:39:03.775257  682462 cri.go:89] found id: ""
	I1027 22:39:03.775289  682462 logs.go:282] 0 containers: []
	W1027 22:39:03.775300  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:39:03.775308  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:39:03.775380  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:39:03.807176  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:39:03.807206  682462 cri.go:89] found id: ""
	I1027 22:39:03.807216  682462 logs.go:282] 1 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387]
	I1027 22:39:03.807281  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:03.811852  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:39:03.811926  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:39:03.842232  682462 cri.go:89] found id: ""
	I1027 22:39:03.842263  682462 logs.go:282] 0 containers: []
	W1027 22:39:03.842274  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:39:03.842281  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:39:03.842346  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:39:03.875031  682462 cri.go:89] found id: ""
	I1027 22:39:03.875060  682462 logs.go:282] 0 containers: []
	W1027 22:39:03.875069  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:39:03.875086  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:39:03.875100  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:39:03.910508  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:39:03.910557  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:39:04.019809  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:39:04.019864  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:39:04.040188  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:39:04.040224  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:39:04.105984  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:39:04.106013  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:39:04.106031  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:39:04.140294  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:39:04.140333  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:39:04.201950  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:39:04.201990  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:39:04.231502  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:39:04.231529  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:39:03.898070  734045 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-927034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:03.919480  734045 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1027 22:39:03.924344  734045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
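The hosts update above is a grep-then-rewrite upsert: strip any stale host.minikube.internal line, append the current mapping, and copy the temp file back. A minimal sketch of the same pattern:

	IP=192.168.103.1   # gateway address from the log above
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '%s\thost.minikube.internal\n' "$IP"
	} > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts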
	I1027 22:39:03.936805  734045 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-927034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:39:03.936992  734045 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:03.937071  734045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:03.973420  734045 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:03.973442  734045 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:39:03.973494  734045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:04.004274  734045 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:04.004301  734045 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:39:04.004310  734045 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1027 22:39:04.004424  734045 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-927034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:39:04.004523  734045 ssh_runner.go:195] Run: crio config
	I1027 22:39:04.055580  734045 cni.go:84] Creating CNI manager for ""
	I1027 22:39:04.055606  734045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:04.055628  734045 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:39:04.055651  734045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-927034 NodeName:default-k8s-diff-port-927034 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:39:04.055781  734045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-927034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:39:04.055845  734045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:39:04.066231  734045 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:39:04.066310  734045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:39:04.076408  734045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1027 22:39:04.092837  734045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:39:04.111050  734045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1027 22:39:04.124989  734045 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:39:04.129228  734045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:39:04.140369  734045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:04.234132  734045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:04.263625  734045 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034 for IP: 192.168.103.2
	I1027 22:39:04.263645  734045 certs.go:195] generating shared ca certs ...
	I1027 22:39:04.263663  734045 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:04.263834  734045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:39:04.263895  734045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:39:04.263908  734045 certs.go:257] generating profile certs ...
	I1027 22:39:04.264001  734045 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/client.key
	I1027 22:39:04.264020  734045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/client.crt with IP's: []
	I1027 22:39:04.723787  734045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/client.crt ...
	I1027 22:39:04.723816  734045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/client.crt: {Name:mk9ad34e4580da0dcc35af2021888a4fd74108ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:04.724003  734045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/client.key ...
	I1027 22:39:04.724017  734045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/client.key: {Name:mk3c301ed3c63ea87cdcdada7dca53a8027d21dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:04.724101  734045 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.key.7445d1ac
	I1027 22:39:04.724118  734045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.crt.7445d1ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1027 22:39:05.013796  734045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.crt.7445d1ac ...
	I1027 22:39:05.013827  734045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.crt.7445d1ac: {Name:mk0f46bf1b1b08ef370aa6a3edd73a92951976d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:05.014028  734045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.key.7445d1ac ...
	I1027 22:39:05.014048  734045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.key.7445d1ac: {Name:mka824d394a2f2ad24ef1005c97442c5aad1f9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:05.014152  734045 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.crt.7445d1ac -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.crt
	I1027 22:39:05.014280  734045 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.key.7445d1ac -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.key
	I1027 22:39:05.014374  734045 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/proxy-client.key
	I1027 22:39:05.014403  734045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/proxy-client.crt with IP's: []
	I1027 22:39:05.369073  734045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/proxy-client.crt ...
	I1027 22:39:05.369111  734045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/proxy-client.crt: {Name:mk4083027b7312d5d4cd8ebf2f717857fd94fd37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:05.369324  734045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/proxy-client.key ...
	I1027 22:39:05.369348  734045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/proxy-client.key: {Name:mkc8ab8c5b9eed2ebec6d356560ccc36a225fe92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
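minikube generates these profile certs in Go (crypto.go), but the shape of the operation is an ordinary CA-signed key pair. An illustrative openssl equivalent, with the subject an assumption rather than minikube's actual value:

	# assumed subject; minikube's real client cert fields may differ
	openssl req -new -newkey rsa:2048 -nodes -keyout client.key \
	  -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
	  -CAcreateserial -days 1095 -out client.crt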
	I1027 22:39:05.369598  734045 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:39:05.369653  734045 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:39:05.369667  734045 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:39:05.369704  734045 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:39:05.369762  734045 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:39:05.369800  734045 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:39:05.369866  734045 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:05.370540  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:39:05.389668  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:39:05.407683  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:39:05.426764  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:39:05.444171  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1027 22:39:05.462380  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:39:05.480136  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:39:05.497938  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/default-k8s-diff-port-927034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:39:05.515528  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:39:05.534480  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:39:05.552406  734045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:39:05.570073  734045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:39:05.583082  734045 ssh_runner.go:195] Run: openssl version
	I1027 22:39:05.589760  734045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:39:05.598348  734045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:39:05.602427  734045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:39:05.602479  734045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:39:05.637632  734045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:39:05.647303  734045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:39:05.656536  734045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:39:05.660387  734045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:39:05.660440  734045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:39:05.700998  734045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:39:05.710978  734045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:39:05.720182  734045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:05.724489  734045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:05.724550  734045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:05.759012  734045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
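The link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: openssl x509 -hash -noout prints the 8-hex-digit value that OpenSSL's default verify path uses to look a CA up under /etc/ssl/certs. The pattern, sketched:

	PEM=/etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 for this CA
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"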
	I1027 22:39:05.768272  734045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:39:05.772262  734045 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:39:05.772326  734045 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-927034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:05.772400  734045 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:39:05.772487  734045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:39:05.804220  734045 cri.go:89] found id: ""
	I1027 22:39:05.804292  734045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:39:05.812892  734045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:39:05.821070  734045 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:39:05.821127  734045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:39:05.829154  734045 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:39:05.829171  734045 kubeadm.go:158] found existing configuration files:
	
	I1027 22:39:05.829219  734045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1027 22:39:05.836623  734045 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:39:05.836676  734045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:39:05.843975  734045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1027 22:39:05.851389  734045 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:39:05.851449  734045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:39:05.858600  734045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1027 22:39:05.866091  734045 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:39:05.866143  734045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:39:05.873370  734045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1027 22:39:05.880653  734045 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:39:05.880699  734045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:39:05.887985  734045 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:39:05.924155  734045 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:39:05.924238  734045 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:39:05.944063  734045 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:39:05.944146  734045 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:39:05.944206  734045 kubeadm.go:319] OS: Linux
	I1027 22:39:05.944271  734045 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:39:05.944352  734045 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:39:05.944432  734045 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:39:05.944514  734045 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:39:05.944585  734045 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:39:05.944652  734045 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:39:05.944725  734045 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:39:05.944797  734045 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:39:06.004174  734045 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:39:06.004352  734045 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:39:06.004488  734045 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:39:06.011749  734045 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
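As an aside (not part of minikube's flow): the same rendered config can be checked without mutating the node via kubeadm's dry-run mode:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run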
	W1027 22:39:06.548626  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	W1027 22:39:09.047925  726897 pod_ready.go:104] pod "coredns-66bc5c9577-m8lfc" is not "Ready", error: <nil>
	I1027 22:39:06.805700  682462 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:39:06.806178  682462 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 22:39:06.806242  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 22:39:06.806303  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 22:39:06.835596  682462 cri.go:89] found id: "b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:39:06.835620  682462 cri.go:89] found id: ""
	I1027 22:39:06.835630  682462 logs.go:282] 1 containers: [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810]
	I1027 22:39:06.835691  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:06.839982  682462 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 22:39:06.840046  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 22:39:06.871603  682462 cri.go:89] found id: ""
	I1027 22:39:06.871626  682462 logs.go:282] 0 containers: []
	W1027 22:39:06.871634  682462 logs.go:284] No container was found matching "etcd"
	I1027 22:39:06.871643  682462 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 22:39:06.871706  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 22:39:06.900324  682462 cri.go:89] found id: ""
	I1027 22:39:06.900354  682462 logs.go:282] 0 containers: []
	W1027 22:39:06.900366  682462 logs.go:284] No container was found matching "coredns"
	I1027 22:39:06.900373  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 22:39:06.900428  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 22:39:06.928183  682462 cri.go:89] found id: "1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:39:06.928210  682462 cri.go:89] found id: ""
	I1027 22:39:06.928221  682462 logs.go:282] 1 containers: [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44]
	I1027 22:39:06.928279  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:06.932405  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 22:39:06.932476  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 22:39:06.964220  682462 cri.go:89] found id: ""
	I1027 22:39:06.964250  682462 logs.go:282] 0 containers: []
	W1027 22:39:06.964259  682462 logs.go:284] No container was found matching "kube-proxy"
	I1027 22:39:06.964267  682462 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 22:39:06.964355  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 22:39:06.991853  682462 cri.go:89] found id: "059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:39:06.991876  682462 cri.go:89] found id: ""
	I1027 22:39:06.991884  682462 logs.go:282] 1 containers: [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387]
	I1027 22:39:06.991932  682462 ssh_runner.go:195] Run: which crictl
	I1027 22:39:06.995894  682462 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 22:39:06.995978  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 22:39:07.023166  682462 cri.go:89] found id: ""
	I1027 22:39:07.023193  682462 logs.go:282] 0 containers: []
	W1027 22:39:07.023202  682462 logs.go:284] No container was found matching "kindnet"
	I1027 22:39:07.023210  682462 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 22:39:07.023272  682462 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 22:39:07.049309  682462 cri.go:89] found id: ""
	I1027 22:39:07.049339  682462 logs.go:282] 0 containers: []
	W1027 22:39:07.049349  682462 logs.go:284] No container was found matching "storage-provisioner"
	I1027 22:39:07.049359  682462 logs.go:123] Gathering logs for kube-controller-manager [059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387] ...
	I1027 22:39:07.049376  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 059eaa1b0b594efe8d1fa1d1d6e99630102d6565ea6298f39659f16b604a8387"
	I1027 22:39:07.077243  682462 logs.go:123] Gathering logs for CRI-O ...
	I1027 22:39:07.077273  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 22:39:07.135136  682462 logs.go:123] Gathering logs for container status ...
	I1027 22:39:07.135169  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 22:39:07.164755  682462 logs.go:123] Gathering logs for kubelet ...
	I1027 22:39:07.164780  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 22:39:07.263312  682462 logs.go:123] Gathering logs for dmesg ...
	I1027 22:39:07.263345  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 22:39:07.281844  682462 logs.go:123] Gathering logs for describe nodes ...
	I1027 22:39:07.281868  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 22:39:07.341747  682462 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 22:39:07.341765  682462 logs.go:123] Gathering logs for kube-apiserver [b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810] ...
	I1027 22:39:07.341780  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8994f7cfc6260aff05b52f6cee2ccfcaf53868cabdbd58016e3d17f9c605810"
	I1027 22:39:07.374879  682462 logs.go:123] Gathering logs for kube-scheduler [1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44] ...
	I1027 22:39:07.374907  682462 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1692e1dd94587dff3b8f195cff1c69d57700b5c2999bbfaa3309261d613e9e44"
	I1027 22:39:06.013456  734045 out.go:252]   - Generating certificates and keys ...
	I1027 22:39:06.013555  734045 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:39:06.013648  734045 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:39:06.129972  734045 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:39:06.207704  734045 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:39:06.236930  734045 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:39:06.555981  734045 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:39:06.982583  734045 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:39:06.982791  734045 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-927034 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1027 22:39:07.299893  734045 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:39:07.300179  734045 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-927034 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1027 22:39:08.290580  734045 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:39:08.585707  734045 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:39:08.860178  734045 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:39:08.860319  734045 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:39:09.225094  734045 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:39:09.447779  734045 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:39:09.741767  734045 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:39:09.957778  734045 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:39:10.152031  734045 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:39:10.152810  734045 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:39:10.158556  734045 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
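
The [certs] and [kubeconfig] lines above correspond one-to-one to kubeadm init phases, so the same steps can be replayed in isolation. A hedged sketch (the kubeadm config path is an assumption based on minikube's usual layout and does not appear in this log):

	sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml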
	
	
	==> CRI-O <==
	Oct 27 22:38:59 embed-certs-829976 crio[782]: time="2025-10-27T22:38:59.818585898Z" level=info msg="Started container" PID=1851 containerID=4c72acf7d91daebe224778f2639d2ebb55b33d3a9588759e47d922191eeadaf9 description=kube-system/storage-provisioner/storage-provisioner id=ddcf1c6d-d92d-4e10-904a-3ecbaa25c8ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a3a2f9afcec74c7f2ddc765afc63121bb762da21f4fa8fe55a6ca1969cca6b2
	Oct 27 22:38:59 embed-certs-829976 crio[782]: time="2025-10-27T22:38:59.831588179Z" level=info msg="Started container" PID=1850 containerID=551cee491d2a92ed04d090a677143f0de1902c45550cb5b6f1664583ad2b7d87 description=kube-system/coredns-66bc5c9577-msbj9/coredns id=7ec05a5a-a5fc-4078-9bef-745060a34430 name=/runtime.v1.RuntimeService/StartContainer sandboxID=62dc49fd973074133e1f20780ba4fa73c92fde3df35344846b009af28c5715ec
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.952987366Z" level=info msg="Running pod sandbox: default/busybox/POD" id=91cdea80-2c6a-4d0d-8245-95bf7b72e30c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.953101227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.95824714Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:365ce352cb4208b0f9d384e838371d0b507bea5fd63138e19f3917ada2bb7b35 UID:f694dbe2-ee8d-4ba0-9699-55c971369055 NetNS:/var/run/netns/a247d37a-9c84-4fd3-aace-92c7e0f0afd8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000334448}] Aliases:map[]}"
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.958277165Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.968722163Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:365ce352cb4208b0f9d384e838371d0b507bea5fd63138e19f3917ada2bb7b35 UID:f694dbe2-ee8d-4ba0-9699-55c971369055 NetNS:/var/run/netns/a247d37a-9c84-4fd3-aace-92c7e0f0afd8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000334448}] Aliases:map[]}"
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.96886648Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.969806954Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.971098787Z" level=info msg="Ran pod sandbox 365ce352cb4208b0f9d384e838371d0b507bea5fd63138e19f3917ada2bb7b35 with infra container: default/busybox/POD" id=91cdea80-2c6a-4d0d-8245-95bf7b72e30c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.972449097Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bd2ee5bf-4e39-4f36-809d-e38cf3e7e1f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.972611017Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bd2ee5bf-4e39-4f36-809d-e38cf3e7e1f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.972652096Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=bd2ee5bf-4e39-4f36-809d-e38cf3e7e1f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.973492564Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=10e4d6c1-afc9-438a-a3d4-96cd27324efe name=/runtime.v1.ImageService/PullImage
	Oct 27 22:39:02 embed-certs-829976 crio[782]: time="2025-10-27T22:39:02.976407908Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.168758422Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=10e4d6c1-afc9-438a-a3d4-96cd27324efe name=/runtime.v1.ImageService/PullImage
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.169580459Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=72d29521-b495-451e-9880-079805bceec3 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.171136459Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c3f7769-ae96-483f-b88d-847175d08c02 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.174428009Z" level=info msg="Creating container: default/busybox/busybox" id=fc9e98e3-61d5-4017-9e80-38531224505d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.174538087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.177826091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.178258004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.214550471Z" level=info msg="Created container 33065d54fe7416f65e95b7a4ec8193a22b6b0e572a38ba2964b6892c6ac6f004: default/busybox/busybox" id=fc9e98e3-61d5-4017-9e80-38531224505d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.215221698Z" level=info msg="Starting container: 33065d54fe7416f65e95b7a4ec8193a22b6b0e572a38ba2964b6892c6ac6f004" id=20f49ab2-a6b8-49c7-b7fe-a7976993bdde name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:39:05 embed-certs-829976 crio[782]: time="2025-10-27T22:39:05.216993808Z" level=info msg="Started container" PID=1924 containerID=33065d54fe7416f65e95b7a4ec8193a22b6b0e572a38ba2964b6892c6ac6f004 description=default/busybox/busybox id=20f49ab2-a6b8-49c7-b7fe-a7976993bdde name=/runtime.v1.RuntimeService/StartContainer sandboxID=365ce352cb4208b0f9d384e838371d0b507bea5fd63138e19f3917ada2bb7b35
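
These CRI-O entries capture the whole busybox pod lifecycle: sandbox setup, image pull, container create, container start. Assuming CRI-O runs as the systemd unit these journal-style lines suggest, the same window can be pulled straight from the node:

	sudo journalctl -u crio --since "2025-10-27 22:38:59" --until "2025-10-27 22:39:06" --no-pager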
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	33065d54fe741       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   365ce352cb420       busybox                                      default
	4c72acf7d91da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   2a3a2f9afcec7       storage-provisioner                          kube-system
	551cee491d2a9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   62dc49fd97307       coredns-66bc5c9577-msbj9                     kube-system
	5cb2fd976978f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   18f3e7a617898       kindnet-dtjql                                kube-system
	94a9d9a6a9a9c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   41f4441e44672       kube-proxy-gf725                             kube-system
	f6ddef72c680b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   48eeece365d27       kube-controller-manager-embed-certs-829976   kube-system
	3286189dcca6b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   883a3d423cd35       kube-scheduler-embed-certs-829976            kube-system
	978c4203647bf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   225f03ee47c18       kube-apiserver-embed-certs-829976            kube-system
	f6c23b162cec4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   98b2478929eb0       etcd-embed-certs-829976                      kube-system
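
This table is the runtime's own view, so it stays available even when the apiserver does not answer. To reproduce it by hand (crictl path as used elsewhere in this log; the inspect line is an illustrative follow-up on the busybox container):

	sudo /usr/local/bin/crictl ps -a
	sudo /usr/local/bin/crictl inspect 33065d54fe7416f65e95b7a4ec8193a22b6b0e572a38ba2964b6892c6ac6f004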
	
	
	==> coredns [551cee491d2a92ed04d090a677143f0de1902c45550cb5b6f1664583ad2b7d87] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34553 - 39842 "HINFO IN 6677562302719735303.91970172464342595. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.042081164s
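
CoreDNS came up cleanly; the single NXDOMAIN entry appears to be the loop plugin's self-probe (random labels, queried from 127.0.0.1), not a client failure. A quick in-cluster resolution check, reusing the busybox image already pulled on this node (pod name dns-check is illustrative):

	kubectl --context embed-certs-829976 run dns-check --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default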
	
	
	==> describe nodes <==
	Name:               embed-certs-829976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-829976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=embed-certs-829976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_38_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:38:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-829976
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:39:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:39:12 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:39:12 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:39:12 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:39:12 +0000   Mon, 27 Oct 2025 22:38:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-829976
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                3b5f0575-3075-4eff-8d0c-0490f489999a
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-msbj9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-829976                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-dtjql                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-829976             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-829976    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-gf725                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-829976             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node embed-certs-829976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node embed-certs-829976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node embed-certs-829976 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node embed-certs-829976 event: Registered Node embed-certs-829976 in Controller
	  Normal  NodeReady                15s   kubelet          Node embed-certs-829976 status is now: NodeReady
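
Nothing in this dump suggests an unhealthy node: it went Ready 15s before collection and no pods are pending. The same output can be reproduced against the live cluster with:

	kubectl --context embed-certs-829976 describe node embed-certs-829976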
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
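
The martian-source lines are host-kernel noise from the shared Docker bridge network: their Oct27 21:56-21:57 timestamps predate this cluster's 22:38 creation by over 40 minutes. They surface only because martian logging is enabled; to confirm on the host:

	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter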
	
	
	==> etcd [f6c23b162cec484a0acb07ed6b31019d7ed96001468258cd3c62cc51a51eab4c] <==
	{"level":"warn","ts":"2025-10-27T22:38:38.350542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.366371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.375654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.384241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.401597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.411238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.421464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.430819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.439032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.447721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.466927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.485092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.502184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.511345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.520223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.621289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:58.923603Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.532865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-829976\" limit:1 ","response":"range_response_count:1 size:5465"}
	{"level":"info","ts":"2025-10-27T22:38:58.923725Z","caller":"traceutil/trace.go:172","msg":"trace[449959035] range","detail":"{range_begin:/registry/minions/embed-certs-829976; range_end:; response_count:1; response_revision:392; }","duration":"184.669014ms","start":"2025-10-27T22:38:58.739039Z","end":"2025-10-27T22:38:58.923708Z","steps":["trace[449959035] 'agreement among raft nodes before linearized reading'  (duration: 99.68337ms)","trace[449959035] 'range keys from in-memory index tree'  (duration: 84.799461ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:38:58.923665Z","caller":"traceutil/trace.go:172","msg":"trace[1995923139] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"213.867894ms","start":"2025-10-27T22:38:58.709774Z","end":"2025-10-27T22:38:58.923641Z","steps":["trace[1995923139] 'process raft request'  (duration: 128.981703ms)","trace[1995923139] 'compare'  (duration: 84.775476ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T22:38:59.184919Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.261707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T22:38:59.185007Z","caller":"traceutil/trace.go:172","msg":"trace[511363169] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:395; }","duration":"179.361437ms","start":"2025-10-27T22:38:59.005629Z","end":"2025-10-27T22:38:59.184991Z","steps":["trace[511363169] 'agreement among raft nodes before linearized reading'  (duration: 56.747615ms)","trace[511363169] 'range keys from in-memory index tree'  (duration: 122.482355ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T22:38:59.185511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.626762ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596666393534425 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-msbj9.18727a2d3ecb2d98\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-msbj9.18727a2d3ecb2d98\" value_size:641 lease:499224629538758206 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T22:38:59.185625Z","caller":"traceutil/trace.go:172","msg":"trace[2039420295] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"182.786641ms","start":"2025-10-27T22:38:59.002813Z","end":"2025-10-27T22:38:59.185600Z","steps":["trace[2039420295] 'process raft request'  (duration: 59.582247ms)","trace[2039420295] 'compare'  (duration: 122.50746ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:38:59.318337Z","caller":"traceutil/trace.go:172","msg":"trace[656479742] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"126.559545ms","start":"2025-10-27T22:38:59.191754Z","end":"2025-10-27T22:38:59.318313Z","steps":["trace[656479742] 'process raft request'  (duration: 117.286217ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:38:59.729722Z","caller":"traceutil/trace.go:172","msg":"trace[1408079125] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"151.678706ms","start":"2025-10-27T22:38:59.578021Z","end":"2025-10-27T22:38:59.729700Z","steps":["trace[1408079125] 'process raft request'  (duration: 68.072705ms)","trace[1408079125] 'compare'  (duration: 83.480291ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:39:13 up  2:21,  0 user,  load average: 3.78, 2.69, 2.74
	Linux embed-certs-829976 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5cb2fd976978fb99afb5be382700e01cd72a1625bc1213ef9684ca6c9dd1ccb3] <==
	I1027 22:38:47.877938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:38:47.878291       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 22:38:47.878536       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:38:47.878555       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:38:47.878587       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:38:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1027 22:38:48.179482       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 22:38:48.179801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1027 22:38:48.179826       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:38:48.179873       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:38:48.179902       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1027 22:38:48.180243       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1027 22:38:48.278762       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:38:49.778734       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:38:49.778764       1 metrics.go:72] Registering metrics
	I1027 22:38:49.778827       1 controller.go:711] "Syncing nftables rules"
	I1027 22:38:58.087082       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:38:58.087122       1 main.go:301] handling current node
	I1027 22:39:08.088079       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:39:08.088123       1 main.go:301] handling current node
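
kindnet's early "connection refused" reflector errors lasted only until the apiserver finished starting; after its caches synced at 22:38:49 it has handled the node steadily every 10s. The CNI config it manages can be checked on the node (standard CNI config directory assumed; the filename glob is illustrative):

	sudo cat /etc/cni/net.d/*kindnet*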
	
	
	==> kube-apiserver [978c4203647bf4fdc2bbf02dd2aba0948fadcf94f94260fb216caf17e8798a4b] <==
	I1027 22:38:39.327449       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 22:38:39.327698       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 22:38:39.327758       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:38:39.327851       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:38:39.332577       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:38:39.334284       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:38:39.524493       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:38:40.229459       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 22:38:40.233758       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 22:38:40.233779       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:38:40.777449       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:38:40.817569       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:38:40.932596       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 22:38:40.942430       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 22:38:40.944001       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:38:40.949544       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:38:41.268695       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:38:41.722725       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:38:41.732413       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 22:38:41.741226       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:38:46.923239       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:38:46.928087       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:38:47.071108       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:38:47.168860       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1027 22:39:11.763883       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:37264: use of closed network connection
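
The lone error at 22:39:11 is a client closing its TLS connection mid-read, typically a finished probe or port-forward, not an apiserver fault. A readiness spot-check against the same endpoint:

	kubectl --context embed-certs-829976 get --raw '/readyz?verbose'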
	
	
	==> kube-controller-manager [f6ddef72c680bd6119b27c5e8e80572cc48b3c34ebb2a2ffe04d3c5079b3d3b3] <==
	I1027 22:38:46.266856       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:38:46.266863       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:38:46.267147       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:38:46.267264       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-829976"
	I1027 22:38:46.267289       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:38:46.267319       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 22:38:46.266880       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:38:46.267583       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 22:38:46.267822       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:38:46.267880       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 22:38:46.269780       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:38:46.269972       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:38:46.272141       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:38:46.272168       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:38:46.272239       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:38:46.272297       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:38:46.272317       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:38:46.272325       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:38:46.273195       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:38:46.279121       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:38:46.281227       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:38:46.281282       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-829976" podCIDRs=["10.244.0.0/24"]
	I1027 22:38:46.288396       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 22:38:46.288415       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:39:01.269740       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [94a9d9a6a9a9cc0b826957ee006842f7ed0e30dafa6b45fe8300c0c29a80d1c5] <==
	I1027 22:38:47.752060       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:38:47.846748       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:38:47.947683       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:38:47.947722       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 22:38:47.947836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:38:47.979556       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:38:47.979625       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:38:47.988370       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:38:47.988843       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:38:47.988886       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:38:47.990633       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:38:47.990661       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:38:47.990717       1 config.go:200] "Starting service config controller"
	I1027 22:38:47.990724       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:38:47.990786       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:38:47.990793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:38:47.991828       1 config.go:309] "Starting node config controller"
	I1027 22:38:47.991914       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:38:47.991928       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:38:48.091202       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:38:48.091296       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:38:48.091355       1 shared_informer.go:356] "Caches are synced" controller="service config"
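
The only warning here is about nodePortAddresses being unset, and the message itself names the fix. In this kubeadm-provisioned cluster that setting lives in kube-proxy's ConfigMap (name and key per kubeadm defaults; treat this as a sketch):

	kubectl --context embed-certs-829976 -n kube-system edit configmap kube-proxy
	# in the config.conf key, set:  nodePortAddresses: ["primary"]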
	
	
	==> kube-scheduler [3286189dcca6ba5593a440d52fb8fd16f153afe8ea20d331a59ff00f93d9e459] <==
	E1027 22:38:39.316714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:38:39.316755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:38:39.316810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:38:39.316854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:38:39.316887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:38:39.316908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:38:39.316868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:38:39.317035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:38:40.125583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:38:40.153242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:38:40.234914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:38:40.299328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:38:40.354495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:38:40.360715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:38:40.386937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:38:40.398193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:38:40.427734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:38:40.444043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:38:40.477268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:38:40.504613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:38:40.517172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:38:40.517318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:38:40.536171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:38:40.569428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1027 22:38:42.210884       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
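
Every "forbidden" error above lands before 22:38:42, the moment the extension-apiserver client CA synced; this is the normal window between the scheduler starting and RBAC bootstrap completing, and none recur afterwards. To verify the permissions did land:

	kubectl --context embed-certs-829976 auth can-i list nodes --as system:kube-scheduler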
	
	
	==> kubelet <==
	Oct 27 22:38:42 embed-certs-829976 kubelet[1321]: E1027 22:38:42.621617    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-829976\" already exists" pod="kube-system/kube-apiserver-embed-certs-829976"
	Oct 27 22:38:42 embed-certs-829976 kubelet[1321]: I1027 22:38:42.637205    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-829976" podStartSLOduration=1.6371537790000001 podStartE2EDuration="1.637153779s" podCreationTimestamp="2025-10-27 22:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:38:42.636868545 +0000 UTC m=+1.149498196" watchObservedRunningTime="2025-10-27 22:38:42.637153779 +0000 UTC m=+1.149783424"
	Oct 27 22:38:42 embed-certs-829976 kubelet[1321]: I1027 22:38:42.656668    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-829976" podStartSLOduration=1.656644736 podStartE2EDuration="1.656644736s" podCreationTimestamp="2025-10-27 22:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:38:42.647181253 +0000 UTC m=+1.159810906" watchObservedRunningTime="2025-10-27 22:38:42.656644736 +0000 UTC m=+1.169274437"
	Oct 27 22:38:42 embed-certs-829976 kubelet[1321]: I1027 22:38:42.656830    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-829976" podStartSLOduration=1.656817937 podStartE2EDuration="1.656817937s" podCreationTimestamp="2025-10-27 22:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:38:42.656798066 +0000 UTC m=+1.169427728" watchObservedRunningTime="2025-10-27 22:38:42.656817937 +0000 UTC m=+1.169447605"
	Oct 27 22:38:42 embed-certs-829976 kubelet[1321]: I1027 22:38:42.681808    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-829976" podStartSLOduration=1.681772084 podStartE2EDuration="1.681772084s" podCreationTimestamp="2025-10-27 22:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:38:42.670496212 +0000 UTC m=+1.183125866" watchObservedRunningTime="2025-10-27 22:38:42.681772084 +0000 UTC m=+1.194401739"
	Oct 27 22:38:46 embed-certs-829976 kubelet[1321]: I1027 22:38:46.337312    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 22:38:46 embed-certs-829976 kubelet[1321]: I1027 22:38:46.338139    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.208231    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e75d998-47cc-4e2c-b1d2-7b6069c821f8-lib-modules\") pod \"kindnet-dtjql\" (UID: \"8e75d998-47cc-4e2c-b1d2-7b6069c821f8\") " pod="kube-system/kindnet-dtjql"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.208283    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3751b38d-bae6-4ea8-9669-346eb3fd7457-kube-proxy\") pod \"kube-proxy-gf725\" (UID: \"3751b38d-bae6-4ea8-9669-346eb3fd7457\") " pod="kube-system/kube-proxy-gf725"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.208369    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8mx\" (UniqueName: \"kubernetes.io/projected/3751b38d-bae6-4ea8-9669-346eb3fd7457-kube-api-access-bs8mx\") pod \"kube-proxy-gf725\" (UID: \"3751b38d-bae6-4ea8-9669-346eb3fd7457\") " pod="kube-system/kube-proxy-gf725"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.208421    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3751b38d-bae6-4ea8-9669-346eb3fd7457-xtables-lock\") pod \"kube-proxy-gf725\" (UID: \"3751b38d-bae6-4ea8-9669-346eb3fd7457\") " pod="kube-system/kube-proxy-gf725"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.208446    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3751b38d-bae6-4ea8-9669-346eb3fd7457-lib-modules\") pod \"kube-proxy-gf725\" (UID: \"3751b38d-bae6-4ea8-9669-346eb3fd7457\") " pod="kube-system/kube-proxy-gf725"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.208466    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvlvs\" (UniqueName: \"kubernetes.io/projected/8e75d998-47cc-4e2c-b1d2-7b6069c821f8-kube-api-access-wvlvs\") pod \"kindnet-dtjql\" (UID: \"8e75d998-47cc-4e2c-b1d2-7b6069c821f8\") " pod="kube-system/kindnet-dtjql"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.208489    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8e75d998-47cc-4e2c-b1d2-7b6069c821f8-cni-cfg\") pod \"kindnet-dtjql\" (UID: \"8e75d998-47cc-4e2c-b1d2-7b6069c821f8\") " pod="kube-system/kindnet-dtjql"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.208514    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e75d998-47cc-4e2c-b1d2-7b6069c821f8-xtables-lock\") pod \"kindnet-dtjql\" (UID: \"8e75d998-47cc-4e2c-b1d2-7b6069c821f8\") " pod="kube-system/kindnet-dtjql"
	Oct 27 22:38:47 embed-certs-829976 kubelet[1321]: I1027 22:38:47.662484    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dtjql" podStartSLOduration=0.662439285 podStartE2EDuration="662.439285ms" podCreationTimestamp="2025-10-27 22:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:38:47.661272051 +0000 UTC m=+6.173901704" watchObservedRunningTime="2025-10-27 22:38:47.662439285 +0000 UTC m=+6.175068936"
	Oct 27 22:38:48 embed-certs-829976 kubelet[1321]: I1027 22:38:48.683313    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gf725" podStartSLOduration=1.683286711 podStartE2EDuration="1.683286711s" podCreationTimestamp="2025-10-27 22:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:38:47.708785108 +0000 UTC m=+6.221414767" watchObservedRunningTime="2025-10-27 22:38:48.683286711 +0000 UTC m=+7.195916364"
	Oct 27 22:38:58 embed-certs-829976 kubelet[1321]: I1027 22:38:58.639558    1321 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 22:38:59 embed-certs-829976 kubelet[1321]: I1027 22:38:59.199519    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eabc58bc-8437-422d-bed2-b0d37d4bb14b-config-volume\") pod \"coredns-66bc5c9577-msbj9\" (UID: \"eabc58bc-8437-422d-bed2-b0d37d4bb14b\") " pod="kube-system/coredns-66bc5c9577-msbj9"
	Oct 27 22:38:59 embed-certs-829976 kubelet[1321]: I1027 22:38:59.199567    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swkl8\" (UniqueName: \"kubernetes.io/projected/eabc58bc-8437-422d-bed2-b0d37d4bb14b-kube-api-access-swkl8\") pod \"coredns-66bc5c9577-msbj9\" (UID: \"eabc58bc-8437-422d-bed2-b0d37d4bb14b\") " pod="kube-system/coredns-66bc5c9577-msbj9"
	Oct 27 22:38:59 embed-certs-829976 kubelet[1321]: I1027 22:38:59.299800    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fcbb9eb6-2144-438f-abf4-a4bd189f88f7-tmp\") pod \"storage-provisioner\" (UID: \"fcbb9eb6-2144-438f-abf4-a4bd189f88f7\") " pod="kube-system/storage-provisioner"
	Oct 27 22:38:59 embed-certs-829976 kubelet[1321]: I1027 22:38:59.299842    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzqbj\" (UniqueName: \"kubernetes.io/projected/fcbb9eb6-2144-438f-abf4-a4bd189f88f7-kube-api-access-kzqbj\") pod \"storage-provisioner\" (UID: \"fcbb9eb6-2144-438f-abf4-a4bd189f88f7\") " pod="kube-system/storage-provisioner"
	Oct 27 22:39:00 embed-certs-829976 kubelet[1321]: I1027 22:39:00.690500    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-msbj9" podStartSLOduration=13.690474882 podStartE2EDuration="13.690474882s" podCreationTimestamp="2025-10-27 22:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:00.690390727 +0000 UTC m=+19.203020379" watchObservedRunningTime="2025-10-27 22:39:00.690474882 +0000 UTC m=+19.203104535"
	Oct 27 22:39:00 embed-certs-829976 kubelet[1321]: I1027 22:39:00.690841    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.690823163 podStartE2EDuration="13.690823163s" podCreationTimestamp="2025-10-27 22:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:00.67775794 +0000 UTC m=+19.190387592" watchObservedRunningTime="2025-10-27 22:39:00.690823163 +0000 UTC m=+19.203452816"
	Oct 27 22:39:02 embed-certs-829976 kubelet[1321]: I1027 22:39:02.723726    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hr7w\" (UniqueName: \"kubernetes.io/projected/f694dbe2-ee8d-4ba0-9699-55c971369055-kube-api-access-9hr7w\") pod \"busybox\" (UID: \"f694dbe2-ee8d-4ba0-9699-55c971369055\") " pod="default/busybox"
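
The kubelet's only error, "mirror pod already exists", is a benign race while static pods are re-registered at startup; everything after it is routine volume attachment and startup-latency bookkeeping. The same stream is available from the node's journal:

	sudo journalctl -u kubelet --since "2025-10-27 22:38:42" --no-pager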
	
	
	==> storage-provisioner [4c72acf7d91daebe224778f2639d2ebb55b33d3a9588759e47d922191eeadaf9] <==
	I1027 22:38:59.829921       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 22:38:59.838717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 22:38:59.838759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 22:38:59.900425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:59.967502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 22:38:59.967766       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 22:38:59.967876       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7c5079ea-db67-4e50-8ac0-354c5782f492", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-829976_d93ad2bb-2152-4192-ab6a-d4ed1350a636 became leader
	I1027 22:38:59.967972       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-829976_d93ad2bb-2152-4192-ab6a-d4ed1350a636!
	W1027 22:38:59.969828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:38:59.974717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 22:39:00.068246       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-829976_d93ad2bb-2152-4192-ab6a-d4ed1350a636!
	W1027 22:39:01.977825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:01.982554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:03.986229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:03.992055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:05.995830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:06.000225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:08.003008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:08.007730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:10.011645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:10.015664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:12.018884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:12.024541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-829976 -n embed-certs-829976
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-829976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.47s)

x
+
TestStartStop/group/no-preload/serial/Pause (6.31s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-188814 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-188814 --alsologtostderr -v=1: exit status 80 (2.173427398s)

-- stdout --
	* Pausing node no-preload-188814 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 22:39:26.779182  740076 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:39:26.779427  740076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:26.779437  740076 out.go:374] Setting ErrFile to fd 2...
	I1027 22:39:26.779440  740076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:26.779654  740076 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:39:26.779880  740076 out.go:368] Setting JSON to false
	I1027 22:39:26.779923  740076 mustload.go:66] Loading cluster: no-preload-188814
	I1027 22:39:26.780300  740076 config.go:182] Loaded profile config "no-preload-188814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:26.780683  740076 cli_runner.go:164] Run: docker container inspect no-preload-188814 --format={{.State.Status}}
	I1027 22:39:26.797110  740076 host.go:66] Checking if "no-preload-188814" exists ...
	I1027 22:39:26.797377  740076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:26.853289  740076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:90 SystemTime:2025-10-27 22:39:26.844185396 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:26.853886  740076 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-188814 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 22:39:26.855580  740076 out.go:179] * Pausing node no-preload-188814 ... 
	I1027 22:39:26.856426  740076 host.go:66] Checking if "no-preload-188814" exists ...
	I1027 22:39:26.856717  740076 ssh_runner.go:195] Run: systemctl --version
	I1027 22:39:26.856765  740076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-188814
	I1027 22:39:26.872987  740076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/no-preload-188814/id_rsa Username:docker}
	I1027 22:39:26.973179  740076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:39:26.985859  740076 pause.go:52] kubelet running: true
	I1027 22:39:26.985921  740076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:39:27.160057  740076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:39:27.160169  740076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:39:27.228371  740076 cri.go:89] found id: "a29b5b5b38c62144f9600bfdf7a35a1afbb4a79f4066d872710ac5cc46b01177"
	I1027 22:39:27.228402  740076 cri.go:89] found id: "27919fedcb8feaa43c4a00ba37dfeb16c6adca323954d5ba6144478dd68929b0"
	I1027 22:39:27.228405  740076 cri.go:89] found id: "03b2ccc9d6b692141146e0afcaf3653fe9df218b37a3f09868f8fb07bbeeac91"
	I1027 22:39:27.228408  740076 cri.go:89] found id: "bf414ccda38a64edfa3182d7b6c18f2e34500e5bba5df0ab6392597ef8eabd7d"
	I1027 22:39:27.228411  740076 cri.go:89] found id: "b65d6d450bfe889472b2618326bf68f2932d48e1ad884af95ec5a48f72d99f28"
	I1027 22:39:27.228418  740076 cri.go:89] found id: "cb9c2393e547842667e6423cc2d69ddfd9af4a1579d9d9531bc90992a0e1b634"
	I1027 22:39:27.228421  740076 cri.go:89] found id: "221d83fbd903479a3c762233eb12a7ec04e14004807c2ce9ea61f8e212524c54"
	I1027 22:39:27.228423  740076 cri.go:89] found id: "002c10e5f271a370eae7e9ac4bbcfa8188b01c92b6b9cb7d034828d114167209"
	I1027 22:39:27.228426  740076 cri.go:89] found id: "da762329de2a8c6c1610d73b7afd01c216fefae715c921b854c125c03fe0ac85"
	I1027 22:39:27.228443  740076 cri.go:89] found id: "03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	I1027 22:39:27.228446  740076 cri.go:89] found id: "d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd"
	I1027 22:39:27.228448  740076 cri.go:89] found id: ""
	I1027 22:39:27.228499  740076 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:39:27.241071  740076 retry.go:31] will retry after 294.944962ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:27Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:39:27.536422  740076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:39:27.549601  740076 pause.go:52] kubelet running: false
	I1027 22:39:27.549664  740076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:39:27.706018  740076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:39:27.706125  740076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:39:27.777878  740076 cri.go:89] found id: "a29b5b5b38c62144f9600bfdf7a35a1afbb4a79f4066d872710ac5cc46b01177"
	I1027 22:39:27.777902  740076 cri.go:89] found id: "27919fedcb8feaa43c4a00ba37dfeb16c6adca323954d5ba6144478dd68929b0"
	I1027 22:39:27.777907  740076 cri.go:89] found id: "03b2ccc9d6b692141146e0afcaf3653fe9df218b37a3f09868f8fb07bbeeac91"
	I1027 22:39:27.777910  740076 cri.go:89] found id: "bf414ccda38a64edfa3182d7b6c18f2e34500e5bba5df0ab6392597ef8eabd7d"
	I1027 22:39:27.777913  740076 cri.go:89] found id: "b65d6d450bfe889472b2618326bf68f2932d48e1ad884af95ec5a48f72d99f28"
	I1027 22:39:27.777917  740076 cri.go:89] found id: "cb9c2393e547842667e6423cc2d69ddfd9af4a1579d9d9531bc90992a0e1b634"
	I1027 22:39:27.777921  740076 cri.go:89] found id: "221d83fbd903479a3c762233eb12a7ec04e14004807c2ce9ea61f8e212524c54"
	I1027 22:39:27.777926  740076 cri.go:89] found id: "002c10e5f271a370eae7e9ac4bbcfa8188b01c92b6b9cb7d034828d114167209"
	I1027 22:39:27.777930  740076 cri.go:89] found id: "da762329de2a8c6c1610d73b7afd01c216fefae715c921b854c125c03fe0ac85"
	I1027 22:39:27.777939  740076 cri.go:89] found id: "03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	I1027 22:39:27.777971  740076 cri.go:89] found id: "d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd"
	I1027 22:39:27.777975  740076 cri.go:89] found id: ""
	I1027 22:39:27.778019  740076 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:39:27.790889  740076 retry.go:31] will retry after 224.529349ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:27Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:39:28.016153  740076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:39:28.030091  740076 pause.go:52] kubelet running: false
	I1027 22:39:28.030160  740076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:39:28.182575  740076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:39:28.182657  740076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:39:28.262555  740076 cri.go:89] found id: "a29b5b5b38c62144f9600bfdf7a35a1afbb4a79f4066d872710ac5cc46b01177"
	I1027 22:39:28.262580  740076 cri.go:89] found id: "27919fedcb8feaa43c4a00ba37dfeb16c6adca323954d5ba6144478dd68929b0"
	I1027 22:39:28.262584  740076 cri.go:89] found id: "03b2ccc9d6b692141146e0afcaf3653fe9df218b37a3f09868f8fb07bbeeac91"
	I1027 22:39:28.262587  740076 cri.go:89] found id: "bf414ccda38a64edfa3182d7b6c18f2e34500e5bba5df0ab6392597ef8eabd7d"
	I1027 22:39:28.262590  740076 cri.go:89] found id: "b65d6d450bfe889472b2618326bf68f2932d48e1ad884af95ec5a48f72d99f28"
	I1027 22:39:28.262593  740076 cri.go:89] found id: "cb9c2393e547842667e6423cc2d69ddfd9af4a1579d9d9531bc90992a0e1b634"
	I1027 22:39:28.262595  740076 cri.go:89] found id: "221d83fbd903479a3c762233eb12a7ec04e14004807c2ce9ea61f8e212524c54"
	I1027 22:39:28.262597  740076 cri.go:89] found id: "002c10e5f271a370eae7e9ac4bbcfa8188b01c92b6b9cb7d034828d114167209"
	I1027 22:39:28.262600  740076 cri.go:89] found id: "da762329de2a8c6c1610d73b7afd01c216fefae715c921b854c125c03fe0ac85"
	I1027 22:39:28.262617  740076 cri.go:89] found id: "03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	I1027 22:39:28.262621  740076 cri.go:89] found id: "d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd"
	I1027 22:39:28.262623  740076 cri.go:89] found id: ""
	I1027 22:39:28.262684  740076 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:39:28.274752  740076 retry.go:31] will retry after 342.344233ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:28Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:39:28.618091  740076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:39:28.634414  740076 pause.go:52] kubelet running: false
	I1027 22:39:28.634487  740076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:39:28.796648  740076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:39:28.796732  740076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:39:28.868577  740076 cri.go:89] found id: "a29b5b5b38c62144f9600bfdf7a35a1afbb4a79f4066d872710ac5cc46b01177"
	I1027 22:39:28.868599  740076 cri.go:89] found id: "27919fedcb8feaa43c4a00ba37dfeb16c6adca323954d5ba6144478dd68929b0"
	I1027 22:39:28.868606  740076 cri.go:89] found id: "03b2ccc9d6b692141146e0afcaf3653fe9df218b37a3f09868f8fb07bbeeac91"
	I1027 22:39:28.868612  740076 cri.go:89] found id: "bf414ccda38a64edfa3182d7b6c18f2e34500e5bba5df0ab6392597ef8eabd7d"
	I1027 22:39:28.868616  740076 cri.go:89] found id: "b65d6d450bfe889472b2618326bf68f2932d48e1ad884af95ec5a48f72d99f28"
	I1027 22:39:28.868621  740076 cri.go:89] found id: "cb9c2393e547842667e6423cc2d69ddfd9af4a1579d9d9531bc90992a0e1b634"
	I1027 22:39:28.868625  740076 cri.go:89] found id: "221d83fbd903479a3c762233eb12a7ec04e14004807c2ce9ea61f8e212524c54"
	I1027 22:39:28.868643  740076 cri.go:89] found id: "002c10e5f271a370eae7e9ac4bbcfa8188b01c92b6b9cb7d034828d114167209"
	I1027 22:39:28.868647  740076 cri.go:89] found id: "da762329de2a8c6c1610d73b7afd01c216fefae715c921b854c125c03fe0ac85"
	I1027 22:39:28.868655  740076 cri.go:89] found id: "03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	I1027 22:39:28.868659  740076 cri.go:89] found id: "d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd"
	I1027 22:39:28.868663  740076 cri.go:89] found id: ""
	I1027 22:39:28.868703  740076 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:39:28.882686  740076 out.go:203] 
	W1027 22:39:28.884152  740076 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:39:28.884180  740076 out.go:285] * 
	* 
	W1027 22:39:28.889656  740076 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:39:28.890784  740076 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-188814 --alsologtostderr -v=1 failed: exit status 80
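Note on the failure mode: each pause attempt above dies at the same step. crictl does find the running containers in the kube-system namespace, but `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so minikube never obtains a container list to pause and eventually exits with GUEST_PAUSE. A minimal diagnostic sketch against this profile while the node is still up (the /run/crun path below is an assumption about an alternate OCI runtime state root on a crio node, not something these logs confirm):

    # shell into the node of the failing profile
    out/minikube-linux-amd64 -p no-preload-188814 ssh
    # the exact listing the pause command retries (fails with status 1 here)
    sudo runc list -f json
    # the CRI-level view, which does see the containers (same label filter as the logs)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # assumption: check which OCI runtime state root actually exists on the node
    sudo ls -d /run/runc /run/crun 2>/dev/null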
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-188814
helpers_test.go:243: (dbg) docker inspect no-preload-188814:

-- stdout --
	[
	    {
	        "Id": "5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032",
	        "Created": "2025-10-27T22:37:08.821298922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 727163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:38:29.363415345Z",
	            "FinishedAt": "2025-10-27T22:38:28.49001921Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/hostname",
	        "HostsPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/hosts",
	        "LogPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032-json.log",
	        "Name": "/no-preload-188814",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-188814:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-188814",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032",
	                "LowerDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-188814",
	                "Source": "/var/lib/docker/volumes/no-preload-188814/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-188814",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-188814",
	                "name.minikube.sigs.k8s.io": "no-preload-188814",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63e15a75b0d8c02d9030a966aac6f56bb0bce0111714de2c2fdf47dbc470016f",
	            "SandboxKey": "/var/run/docker/netns/63e15a75b0d8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-188814": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:62:17:5e:f6:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae03ff1f23a640f11de7d6590557c58c27007a2db36f9f0148ee4c491af73383",
	                    "EndpointID": "f03f9eb7b1dd8bbfadf4f418c6bd54b85e50fce14b3dd1541d7fb5737357a740",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-188814",
	                        "5aadc4ee2b12"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
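For reference, the host port that reaches the node's sshd (22/tcp -> 127.0.0.1:33073 in the inspect output above) can be read back with the same Go template the pause command ran earlier, re-quoted here for an interactive shell:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-188814
    # prints 33073 for this run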
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188814 -n no-preload-188814
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188814 -n no-preload-188814: exit status 2 (347.010311ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-188814 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-188814 logs -n 25: (1.297142542s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-565903                                                                                                                                                                                                                        │ NoKubernetes-565903          │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-908589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │                     │
	│ stop    │ -p old-k8s-version-908589 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-908589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ start   │ -p cert-expiration-219241 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-219241       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ stop    │ -p no-preload-188814 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p cert-expiration-219241                                                                                                                                                                                                                     │ cert-expiration-219241       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ addons  │ enable dashboard -p no-preload-188814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ image   │ old-k8s-version-908589 image list --format=json                                                                                                                                                                                               │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ pause   │ -p old-k8s-version-908589 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p disable-driver-mounts-617659                                                                                                                                                                                                               │ disable-driver-mounts-617659 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-829976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ stop    │ -p embed-certs-829976 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ image   │ no-preload-188814 image list --format=json                                                                                                                                                                                                    │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ pause   │ -p no-preload-188814 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:39:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:39:25.857965  739756 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:39:25.858066  739756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:25.858074  739756 out.go:374] Setting ErrFile to fd 2...
	I1027 22:39:25.858078  739756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:25.858298  739756 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:39:25.858752  739756 out.go:368] Setting JSON to false
	I1027 22:39:25.860335  739756 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8505,"bootTime":1761596261,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:39:25.860450  739756 start.go:143] virtualization: kvm guest
	I1027 22:39:25.861980  739756 out.go:179] * [kubernetes-upgrade-695499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:39:25.863217  739756 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:39:25.863253  739756 notify.go:221] Checking for updates...
	I1027 22:39:25.865130  739756 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:39:25.866179  739756 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:39:25.867126  739756 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:39:25.868066  739756 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:39:25.869045  739756 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:39:25.870374  739756 config.go:182] Loaded profile config "kubernetes-upgrade-695499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:25.870977  739756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:39:25.897209  739756 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:39:25.897290  739756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:25.957617  739756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:89 SystemTime:2025-10-27 22:39:25.946038708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:25.957730  739756 docker.go:318] overlay module found
	I1027 22:39:25.959486  739756 out.go:179] * Using the docker driver based on existing profile
	I1027 22:39:25.960476  739756 start.go:307] selected driver: docker
	I1027 22:39:25.960491  739756 start.go:928] validating driver "docker" against &{Name:kubernetes-upgrade-695499 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-695499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:25.960586  739756 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:39:25.961182  739756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:26.025272  739756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:89 SystemTime:2025-10-27 22:39:26.015161596 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:26.025574  739756 cni.go:84] Creating CNI manager for ""
	I1027 22:39:26.025635  739756 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:26.025664  739756 start.go:351] cluster config:
	{Name:kubernetes-upgrade-695499 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-695499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:26.027430  739756 out.go:179] * Starting "kubernetes-upgrade-695499" primary control-plane node in "kubernetes-upgrade-695499" cluster
	I1027 22:39:26.028584  739756 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:39:26.029756  739756 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:39:26.030638  739756 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:26.030681  739756 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:39:26.030704  739756 cache.go:59] Caching tarball of preloaded images
	I1027 22:39:26.030730  739756 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:39:26.030792  739756 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:39:26.030809  739756 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:39:26.030895  739756 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/config.json ...
	I1027 22:39:26.052085  739756 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:39:26.052118  739756 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:39:26.052139  739756 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:39:26.052171  739756 start.go:360] acquireMachinesLock for kubernetes-upgrade-695499: {Name:mkec40f5d86362c3c0e1baba0d014c7a6178b3d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:39:26.052239  739756 start.go:364] duration metric: took 44.271µs to acquireMachinesLock for "kubernetes-upgrade-695499"
	I1027 22:39:26.052262  739756 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:39:26.052269  739756 fix.go:55] fixHost starting: 
	I1027 22:39:26.052609  739756 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-695499 --format={{.State.Status}}
	I1027 22:39:26.070064  739756 fix.go:113] recreateIfNeeded on kubernetes-upgrade-695499: state=Running err=<nil>
	W1027 22:39:26.070092  739756 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 22:39:26.071768  739756 out.go:252] * Updating the running docker "kubernetes-upgrade-695499" container ...
	I1027 22:39:26.071813  739756 machine.go:94] provisionDockerMachine start ...
	I1027 22:39:26.071895  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:26.089418  739756 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:26.089787  739756 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1027 22:39:26.089806  739756 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:39:26.231976  739756 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-695499
	
	I1027 22:39:26.232007  739756 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-695499"
	I1027 22:39:26.232072  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:26.249694  739756 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:26.249999  739756 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1027 22:39:26.250026  739756 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-695499 && echo "kubernetes-upgrade-695499" | sudo tee /etc/hostname
	I1027 22:39:26.404250  739756 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-695499
	
	I1027 22:39:26.404404  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:26.425591  739756 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:26.425836  739756 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1027 22:39:26.425867  739756 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-695499' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-695499/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-695499' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:39:26.573077  739756 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:39:26.573105  739756 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:39:26.573148  739756 ubuntu.go:190] setting up certificates
	I1027 22:39:26.573176  739756 provision.go:84] configureAuth start
	I1027 22:39:26.573244  739756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-695499
	I1027 22:39:26.591520  739756 provision.go:143] copyHostCerts
	I1027 22:39:26.591585  739756 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:39:26.591609  739756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:39:26.591675  739756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:39:26.591781  739756 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:39:26.591790  739756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:39:26.591818  739756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:39:26.591889  739756 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:39:26.591897  739756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:39:26.591920  739756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:39:26.592028  739756 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-695499 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-695499 localhost minikube]
	I1027 22:39:26.904780  739756 provision.go:177] copyRemoteCerts
	I1027 22:39:26.904849  739756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:39:26.904889  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:26.921853  739756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kubernetes-upgrade-695499/id_rsa Username:docker}
	I1027 22:39:27.025208  739756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:39:27.048577  739756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1027 22:39:27.065640  739756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:39:27.082754  739756 provision.go:87] duration metric: took 509.560693ms to configureAuth
	I1027 22:39:27.082789  739756 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:39:27.083008  739756 config.go:182] Loaded profile config "kubernetes-upgrade-695499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:27.083126  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:27.101126  739756 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:27.101336  739756 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1027 22:39:27.101352  739756 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:39:27.603549  739756 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:39:27.603581  739756 machine.go:97] duration metric: took 1.531754122s to provisionDockerMachine
	I1027 22:39:27.603596  739756 start.go:293] postStartSetup for "kubernetes-upgrade-695499" (driver="docker")
	I1027 22:39:27.603610  739756 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:39:27.603710  739756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:39:27.603753  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:27.624457  739756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kubernetes-upgrade-695499/id_rsa Username:docker}
	I1027 22:39:27.724540  739756 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:39:27.728307  739756 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:39:27.728337  739756 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:39:27.728349  739756 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:39:27.728410  739756 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:39:27.728492  739756 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:39:27.728637  739756 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:39:27.737612  739756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:27.757075  739756 start.go:296] duration metric: took 153.462521ms for postStartSetup
	I1027 22:39:27.757158  739756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:39:27.757212  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:27.777543  739756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kubernetes-upgrade-695499/id_rsa Username:docker}
	I1027 22:39:27.875256  739756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:39:27.880266  739756 fix.go:57] duration metric: took 1.827990745s for fixHost
	I1027 22:39:27.880294  739756 start.go:83] releasing machines lock for "kubernetes-upgrade-695499", held for 1.828041785s
	I1027 22:39:27.880360  739756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-695499
	I1027 22:39:27.898266  739756 ssh_runner.go:195] Run: cat /version.json
	I1027 22:39:27.898328  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:27.898361  739756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:39:27.898424  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:27.915300  739756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kubernetes-upgrade-695499/id_rsa Username:docker}
	I1027 22:39:27.915558  739756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kubernetes-upgrade-695499/id_rsa Username:docker}
	I1027 22:39:28.011860  739756 ssh_runner.go:195] Run: systemctl --version
	I1027 22:39:28.069614  739756 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:39:28.111144  739756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:39:28.115927  739756 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:39:28.116002  739756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:39:28.123789  739756 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:39:28.123808  739756 start.go:496] detecting cgroup driver to use...
	I1027 22:39:28.123840  739756 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:39:28.123888  739756 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:39:28.138063  739756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:39:28.150128  739756 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:39:28.150176  739756 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:39:28.163915  739756 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:39:28.175760  739756 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:39:28.286031  739756 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:39:28.386141  739756 docker.go:234] disabling docker service ...
	I1027 22:39:28.386213  739756 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:39:28.401977  739756 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:39:28.415829  739756 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:39:28.514536  739756 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:39:28.626432  739756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:39:28.641598  739756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:39:28.657047  739756 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:39:28.657121  739756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:28.666482  739756 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:39:28.666540  739756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:28.675851  739756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:28.694744  739756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:28.705573  739756 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:39:28.714470  739756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:28.723185  739756 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:28.731403  739756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:28.740696  739756 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:39:28.748075  739756 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:39:28.755333  739756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:28.859715  739756 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:39:29.008032  739756 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:39:29.008095  739756 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:39:29.012535  739756 start.go:564] Will wait 60s for crictl version
	I1027 22:39:29.012607  739756 ssh_runner.go:195] Run: which crictl
	I1027 22:39:29.016580  739756 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:39:29.045905  739756 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:39:29.046028  739756 ssh_runner.go:195] Run: crio --version
	I1027 22:39:29.076936  739756 ssh_runner.go:195] Run: crio --version
	I1027 22:39:29.108221  739756 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
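Note: the sed commands logged at 22:39:28 above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A minimal sketch of the drop-in they converge on, reconstructed from those sed expressions alone (the actual file on the node may carry additional keys; section placement follows CRI-O's standard TOML layout):

	# Hypothetical reconstruction for illustration, not the verbatim node file.
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio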
	
	
	==> CRI-O <==
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.744192845Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.748638756Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.748663606Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.921004947Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=9d0f4746-95c3-4457-95e3-9b4a63366983 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.921665655Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b82bc2f8-4e64-413f-bd12-db39e219c82f name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.923389782Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4822bc81-6c61-4ab0-ae21-bfa1e56a1528 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.927529482Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms/kubernetes-dashboard" id=6ba10782-4b35-4cd5-8968-986e63c5b527 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.927658627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.931851685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.932118038Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d3c9ed15123039bdf9fe249026e62e8265f74879469924345afb39580715aa46/merged/etc/group: no such file or directory"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.932561718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.96626447Z" level=info msg="Created container d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms/kubernetes-dashboard" id=6ba10782-4b35-4cd5-8968-986e63c5b527 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.966884007Z" level=info msg="Starting container: d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd" id=23eef332-42a1-4cbe-a119-4e8fad8a4462 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.968716212Z" level=info msg="Started container" PID=1718 containerID=d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms/kubernetes-dashboard id=23eef332-42a1-4cbe-a119-4e8fad8a4462 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fafe6bba9c65f4f40ecb5d857f6f36f3715aa0cd77dc703c90d7726140c83746
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.799726306Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9617a45a-b8ef-43a9-b9df-7499eedaf9e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.800808595Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=398f38bb-44f5-41bd-acff-bf9aa03b2881 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.80189395Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq/dashboard-metrics-scraper" id=2ec4d0f9-89ab-45d9-a9e0-e3f07722f922 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.80206213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.807617149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.808096557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.841704647Z" level=info msg="Created container 03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq/dashboard-metrics-scraper" id=2ec4d0f9-89ab-45d9-a9e0-e3f07722f922 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.84231429Z" level=info msg="Starting container: 03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f" id=b8bc6aed-8c06-42db-89b0-02a5bbbfc175 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.843979683Z" level=info msg="Started container" PID=1738 containerID=03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq/dashboard-metrics-scraper id=b8bc6aed-8c06-42db-89b0-02a5bbbfc175 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d4aed68b94cddd094744a3fc9d46a85b4f3ab6c82cf1fbd819fcfceb4e54075
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.937009577Z" level=info msg="Removing container: 289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265" id=ff775b84-f2d2-43b2-847b-a574628acd8c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.947262006Z" level=info msg="Removed container 289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq/dashboard-metrics-scraper" id=ff775b84-f2d2-43b2-847b-a574628acd8c name=/runtime.v1.RuntimeService/RemoveContainer
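The CRI-O messages above are the crio systemd unit's journal on the no-preload-188814 node, so they can also be tailed live while a test runs. A small sketch, assuming SSH access to the node (for example via minikube ssh):

	# Follow CRI-O's journal and filter for one of the container IDs seen above
	sudo journalctl -u crio -f | grep d56d3a6ef1dde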
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	03e8b617bf379       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   7d4aed68b94cd       dashboard-metrics-scraper-6ffb444bf9-dxrwq   kubernetes-dashboard
	d56d3a6ef1dde       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   fafe6bba9c65f       kubernetes-dashboard-855c9754f9-6rnms        kubernetes-dashboard
	a29b5b5b38c62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Running             storage-provisioner         1                   62d7454d87cee       storage-provisioner                          kube-system
	27919fedcb8fe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   d402c6aab6e34       coredns-66bc5c9577-m8lfc                     kube-system
	e3d94fe20b04d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   b3fab1a7da743       busybox                                      default
	03b2ccc9d6b69       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   69ebabfef1dd1       kindnet-thlc6                                kube-system
	bf414ccda38a6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   62d7454d87cee       storage-provisioner                          kube-system
	b65d6d450bfe8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   73eeb7fbf23a2       kube-proxy-4nwvc                             kube-system
	cb9c2393e5478       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   acf4b609a260e       etcd-no-preload-188814                       kube-system
	221d83fbd9034       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   bafe14750757c       kube-apiserver-no-preload-188814             kube-system
	002c10e5f271a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   0218110d515f2       kube-scheduler-no-preload-188814             kube-system
	da762329de2a8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   e727e2a0ab4fe       kube-controller-manager-no-preload-188814    kube-system
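The table above is CRI container state, so the same view can be regenerated on the node with crictl, which this log configured against unix:///var/run/crio/crio.sock earlier. A sketch:

	# List all CRI containers, including exited ones
	sudo crictl ps -a
	# Inspect a single entry, e.g. the exited dashboard-metrics-scraper attempt
	sudo crictl inspect 03e8b617bf379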
	
	
	==> coredns [27919fedcb8feaa43c4a00ba37dfeb16c6adca323954d5ba6144478dd68929b0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46833 - 41502 "HINFO IN 5787618683313925941.7437279659148615852. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03251686s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
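The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors above mean CoreDNS could not reach the kubernetes Service VIP at startup, which typically points at kube-proxy rules or an apiserver that was still coming back up. One way to probe the same path from inside the cluster, as a sketch (netcheck is a hypothetical pod name; nicolaka/netshoot is just a convenient third-party debug image, assumed here to ship an nc that supports -z):

	kubectl run netcheck --rm -it --restart=Never \
	  --image=nicolaka/netshoot -- nc -vz -w 2 10.96.0.1 443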
	
	
	==> describe nodes <==
	Name:               no-preload-188814
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-188814
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=no-preload-188814
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_37_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:37:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-188814
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:39:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:39:09 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:39:09 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:39:09 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:39:09 +0000   Mon, 27 Oct 2025 22:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-188814
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9b25c6cb-fee1-43be-8dc1-88bc737c041a
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-m8lfc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-188814                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-thlc6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-188814              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-188814     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-4nwvc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-188814              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dxrwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6rnms         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node no-preload-188814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node no-preload-188814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node no-preload-188814 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node no-preload-188814 event: Registered Node no-preload-188814 in Controller
	  Normal  NodeReady                94s                kubelet          Node no-preload-188814 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node no-preload-188814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node no-preload-188814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node no-preload-188814 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node no-preload-188814 event: Registered Node no-preload-188814 in Controller
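As a quick consistency check on the totals above: the per-pod CPU requests sum to 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 8 CPUs (8000m) is about 10.6%, displayed rounded down as 10%.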
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
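The "martian source" lines above are the kernel flagging packets whose source address cannot be valid on the receiving interface (here 127.0.0.1 arriving on eth0), common noise on containerized CI hosts. Whether they are logged at all is controlled by a sysctl, so the setting can be inspected (or set log_martians to 0 to silence them) with:

	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter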
	
	
	==> etcd [cb9c2393e547842667e6423cc2d69ddfd9af4a1579d9d9531bc90992a0e1b634] <==
	{"level":"warn","ts":"2025-10-27T22:38:37.631324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.640583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.653320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.664037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.677565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.694150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.718884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.728181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.737541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.764463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.785988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.807465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.823804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.836703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.848451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.861201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.872832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.884560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.893786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.904703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.924182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.941494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.951836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.065391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38806","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:38:53.587872Z","caller":"traceutil/trace.go:172","msg":"trace[466806980] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"111.828043ms","start":"2025-10-27T22:38:53.476028Z","end":"2025-10-27T22:38:53.587856Z","steps":["trace[466806980] 'process raft request'  (duration: 111.702054ms)"],"step_count":1}
	
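The long run of "rejected connection ... error EOF" warnings above is characteristic of raw TCP connections that close before completing a TLS handshake, which is what port probes and restarting clients do while the apiserver comes back up. A sketch of how an equivalent warning can be provoked against etcd's client port (run on the node; 2379 is etcd's default client port):

	# Open and immediately close a bare TCP connection to etcd
	timeout 1 bash -c '</dev/tcp/127.0.0.1/2379'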
	
	==> kernel <==
	 22:39:30 up  2:21,  0 user,  load average: 3.54, 2.72, 2.75
	Linux no-preload-188814 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03b2ccc9d6b692141146e0afcaf3653fe9df218b37a3f09868f8fb07bbeeac91] <==
	I1027 22:38:39.480522       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:38:39.480794       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 22:38:39.481015       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:38:39.481600       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:38:39.482069       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:38:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:38:39.683811       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:38:39.683853       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:38:39.683865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:38:39.684029       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:38:40.077303       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:38:40.077334       1 metrics.go:72] Registering metrics
	I1027 22:38:40.077386       1 controller.go:711] "Syncing nftables rules"
	I1027 22:38:49.685052       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:38:49.685126       1 main.go:301] handling current node
	I1027 22:38:59.684059       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:38:59.684088       1 main.go:301] handling current node
	I1027 22:39:09.686062       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:39:09.686096       1 main.go:301] handling current node
	I1027 22:39:19.685166       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:39:19.685287       1 main.go:301] handling current node
	I1027 22:39:29.690173       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:39:29.690216       1 main.go:301] handling current node
	
	
	==> kube-apiserver [221d83fbd903479a3c762233eb12a7ec04e14004807c2ce9ea61f8e212524c54] <==
	I1027 22:38:38.786808       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 22:38:38.786842       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 22:38:38.786856       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 22:38:38.786968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:38:38.789353       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:38:38.789411       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 22:38:38.789453       1 aggregator.go:171] initial CRD sync complete...
	I1027 22:38:38.789472       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:38:38.789480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:38:38.789485       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:38:38.796864       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:38:38.827261       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:38:38.848196       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:38:38.858719       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:38:38.954735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:38:39.239151       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:38:39.314001       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:38:39.353139       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:38:39.364163       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:38:39.420899       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.241.236"}
	I1027 22:38:39.431924       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.44.178"}
	I1027 22:38:39.682507       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:38:42.465324       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:38:42.665618       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:38:42.764154       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [da762329de2a8c6c1610d73b7afd01c216fefae715c921b854c125c03fe0ac85] <==
	I1027 22:38:42.211633       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 22:38:42.212534       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:38:42.212670       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:38:42.212762       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-188814"
	I1027 22:38:42.212850       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:38:42.215916       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:38:42.235615       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:38:42.242766       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:38:42.246081       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 22:38:42.249335       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:38:42.253607       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:38:42.256341       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:38:42.259697       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:38:42.260020       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:38:42.260032       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:38:42.260056       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 22:38:42.260152       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:38:42.260187       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:38:42.260430       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 22:38:42.260433       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:38:42.262439       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:38:42.262464       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:38:42.264731       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:38:42.264740       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:38:42.282017       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b65d6d450bfe889472b2618326bf68f2932d48e1ad884af95ec5a48f72d99f28] <==
	I1027 22:38:39.300755       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:38:39.376741       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:38:39.477387       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:38:39.477425       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 22:38:39.477526       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:38:39.499206       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:38:39.499266       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:38:39.505413       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:38:39.505828       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:38:39.505854       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:38:39.507348       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:38:39.507841       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:38:39.507436       1 config.go:309] "Starting node config controller"
	I1027 22:38:39.508144       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:38:39.508161       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:38:39.507365       1 config.go:200] "Starting service config controller"
	I1027 22:38:39.508171       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:38:39.507441       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:38:39.508194       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:38:39.608310       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:38:39.608335       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:38:39.608374       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [002c10e5f271a370eae7e9ac4bbcfa8188b01c92b6b9cb7d034828d114167209] <==
	I1027 22:38:37.648078       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:38:38.756815       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:38:38.756873       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:38:38.756887       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:38:38.756898       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:38:38.788339       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:38:38.788384       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:38:38.797618       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:38:38.798354       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:38:38.799468       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:38:38.799565       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:38:38.898988       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:38:42 no-preload-188814 kubelet[707]: I1027 22:38:42.977562     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrkn7\" (UniqueName: \"kubernetes.io/projected/4fdc4f52-990c-4a10-9be5-3f62c053b5f0-kube-api-access-nrkn7\") pod \"dashboard-metrics-scraper-6ffb444bf9-dxrwq\" (UID: \"4fdc4f52-990c-4a10-9be5-3f62c053b5f0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq"
	Oct 27 22:38:42 no-preload-188814 kubelet[707]: I1027 22:38:42.977586     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/95a930ae-c927-4ee0-88ae-5ceaa45d8edc-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6rnms\" (UID: \"95a930ae-c927-4ee0-88ae-5ceaa45d8edc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms"
	Oct 27 22:38:42 no-preload-188814 kubelet[707]: I1027 22:38:42.977600     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5xw\" (UniqueName: \"kubernetes.io/projected/95a930ae-c927-4ee0-88ae-5ceaa45d8edc-kube-api-access-kc5xw\") pod \"kubernetes-dashboard-855c9754f9-6rnms\" (UID: \"95a930ae-c927-4ee0-88ae-5ceaa45d8edc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms"
	Oct 27 22:38:43 no-preload-188814 kubelet[707]: I1027 22:38:43.470702     707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 22:38:45 no-preload-188814 kubelet[707]: I1027 22:38:45.868402     707 scope.go:117] "RemoveContainer" containerID="529e6ce96f2d27d147bff38c2fb2d68470ab033828fe50847640b43942945199"
	Oct 27 22:38:46 no-preload-188814 kubelet[707]: I1027 22:38:46.878307     707 scope.go:117] "RemoveContainer" containerID="529e6ce96f2d27d147bff38c2fb2d68470ab033828fe50847640b43942945199"
	Oct 27 22:38:46 no-preload-188814 kubelet[707]: I1027 22:38:46.878639     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:38:46 no-preload-188814 kubelet[707]: E1027 22:38:46.878835     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:38:47 no-preload-188814 kubelet[707]: I1027 22:38:47.885023     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:38:47 no-preload-188814 kubelet[707]: E1027 22:38:47.885203     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:38:51 no-preload-188814 kubelet[707]: I1027 22:38:51.866070     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:38:51 no-preload-188814 kubelet[707]: E1027 22:38:51.866468     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:38:54 no-preload-188814 kubelet[707]: I1027 22:38:54.434201     707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms" podStartSLOduration=5.676763915 podStartE2EDuration="12.434178793s" podCreationTimestamp="2025-10-27 22:38:42 +0000 UTC" firstStartedPulling="2025-10-27 22:38:43.165325891 +0000 UTC m=+7.464511720" lastFinishedPulling="2025-10-27 22:38:49.922740769 +0000 UTC m=+14.221926598" observedRunningTime="2025-10-27 22:38:50.909205418 +0000 UTC m=+15.208391272" watchObservedRunningTime="2025-10-27 22:38:54.434178793 +0000 UTC m=+18.733364633"
	Oct 27 22:39:06 no-preload-188814 kubelet[707]: I1027 22:39:06.799158     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:39:06 no-preload-188814 kubelet[707]: I1027 22:39:06.935646     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:39:06 no-preload-188814 kubelet[707]: I1027 22:39:06.935874     707 scope.go:117] "RemoveContainer" containerID="03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	Oct 27 22:39:06 no-preload-188814 kubelet[707]: E1027 22:39:06.936113     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:39:11 no-preload-188814 kubelet[707]: I1027 22:39:11.866930     707 scope.go:117] "RemoveContainer" containerID="03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	Oct 27 22:39:11 no-preload-188814 kubelet[707]: E1027 22:39:11.867166     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:39:25 no-preload-188814 kubelet[707]: I1027 22:39:25.799409     707 scope.go:117] "RemoveContainer" containerID="03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	Oct 27 22:39:25 no-preload-188814 kubelet[707]: E1027 22:39:25.799638     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:39:27 no-preload-188814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:39:27 no-preload-188814 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:39:27 no-preload-188814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:39:27 no-preload-188814 systemd[1]: kubelet.service: Consumed 1.664s CPU time.
	
	
	==> kubernetes-dashboard [d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd] <==
	2025/10/27 22:38:49 Starting overwatch
	2025/10/27 22:38:49 Using namespace: kubernetes-dashboard
	2025/10/27 22:38:49 Using in-cluster config to connect to apiserver
	2025/10/27 22:38:49 Using secret token for csrf signing
	2025/10/27 22:38:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:38:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:38:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 22:38:50 Generating JWE encryption key
	2025/10/27 22:38:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:38:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:38:50 Initializing JWE encryption key from synchronized object
	2025/10/27 22:38:50 Creating in-cluster Sidecar client
	2025/10/27 22:38:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:38:50 Serving insecurely on HTTP port: 9090
	2025/10/27 22:39:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a29b5b5b38c62144f9600bfdf7a35a1afbb4a79f4066d872710ac5cc46b01177] <==
	W1027 22:39:05.411814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:07.415619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:07.420934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:09.424123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:09.428181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:11.431506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:11.435259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:13.439780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:13.445189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:15.449023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:15.454034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:17.457580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:17.461223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:19.465649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:19.471036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:21.475301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:21.479068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:23.482505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:23.486968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:25.490335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:25.494766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:27.497588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:27.506369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:29.509581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:29.513739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bf414ccda38a64edfa3182d7b6c18f2e34500e5bba5df0ab6392597ef8eabd7d] <==
	I1027 22:38:39.244455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:38:39.248174       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
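The log dump above shows two distinct patterns rather than one: dashboard-metrics-scraper-6ffb444bf9-dxrwq is in CrashLoopBackOff (the kubelet back-off grows from 10s to 20s), while an earlier storage-provisioner container (bf414ccda38a) exited immediately with "connection refused" against 10.96.0.1:443 because it raced the apiserver during startup; its replacement (a29b5b5b38c6) runs and only logs Endpoints deprecation warnings. A minimal triage sketch, assuming the profile is still up; the pod name and namespace are taken from the kubelet log above:

	# Describe the crash-looping pod; --previous prints the last crashed container's logs.
	kubectl --context no-preload-188814 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-dxrwq
	kubectl --context no-preload-188814 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-dxrwq --previous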
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188814 -n no-preload-188814
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188814 -n no-preload-188814: exit status 2 (418.445923ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
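The "(may be ok)" tolerance reflects that minikube status encodes component state in its exit code rather than only signalling hard errors, so a nonzero exit can coexist with an APIServer field of "Running" on a cluster the test has just tried to pause. Reproducing the check by hand (a sketch using the same command the helper ran):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188814 -n no-preload-188814; echo "exit=$?"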
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-188814 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-188814
helpers_test.go:243: (dbg) docker inspect no-preload-188814:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032",
	        "Created": "2025-10-27T22:37:08.821298922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 727163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:38:29.363415345Z",
	            "FinishedAt": "2025-10-27T22:38:28.49001921Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/hostname",
	        "HostsPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/hosts",
	        "LogPath": "/var/lib/docker/containers/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032/5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032-json.log",
	        "Name": "/no-preload-188814",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-188814:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-188814",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5aadc4ee2b1279ae859d188f8c53aa79145edbda06c3a5643df1797285cfc032",
	                "LowerDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c8f1633c4e360ceba6dcb27f8fa7353c671eb437ecac655d12f52871bc11761/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-188814",
	                "Source": "/var/lib/docker/volumes/no-preload-188814/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-188814",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-188814",
	                "name.minikube.sigs.k8s.io": "no-preload-188814",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63e15a75b0d8c02d9030a966aac6f56bb0bce0111714de2c2fdf47dbc470016f",
	            "SandboxKey": "/var/run/docker/netns/63e15a75b0d8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-188814": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:62:17:5e:f6:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae03ff1f23a640f11de7d6590557c58c27007a2db36f9f0148ee4c491af73383",
	                    "EndpointID": "f03f9eb7b1dd8bbfadf4f418c6bd54b85e50fce14b3dd1541d7fb5737357a740",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-188814",
	                        "5aadc4ee2b12"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
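The inspect output above confirms the container itself is running and shows the localhost port bindings minikube depends on (8443/tcp, the apiserver port, is published on 127.0.0.1:33076). A quick way to pull a single binding out of this JSON is the same Go template minikube runs later in this log, pointed at 8443/tcp instead of 22/tcp (a sketch, not part of the test run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-188814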
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188814 -n no-preload-188814
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188814 -n no-preload-188814: exit status 2 (383.426563ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-188814 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-188814 logs -n 25: (1.101160947s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-908589 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-908589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:37 UTC │
	│ start   │ -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:37 UTC │ 27 Oct 25 22:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-188814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ start   │ -p cert-expiration-219241 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-219241       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ stop    │ -p no-preload-188814 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p cert-expiration-219241                                                                                                                                                                                                                     │ cert-expiration-219241       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ addons  │ enable dashboard -p no-preload-188814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ image   │ old-k8s-version-908589 image list --format=json                                                                                                                                                                                               │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ pause   │ -p old-k8s-version-908589 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p disable-driver-mounts-617659                                                                                                                                                                                                               │ disable-driver-mounts-617659 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-829976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ stop    │ -p embed-certs-829976 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ image   │ no-preload-188814 image list --format=json                                                                                                                                                                                                    │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ pause   │ -p no-preload-188814 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-829976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-695499                                                                                                                                                                                                                  │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:39:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:39:30.727481  741885 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:39:30.727627  741885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:30.727640  741885 out.go:374] Setting ErrFile to fd 2...
	I1027 22:39:30.727648  741885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:30.727974  741885 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:39:30.728585  741885 out.go:368] Setting JSON to false
	I1027 22:39:30.730005  741885 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8510,"bootTime":1761596261,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:39:30.730104  741885 start.go:143] virtualization: kvm guest
	I1027 22:39:30.731791  741885 out.go:179] * [embed-certs-829976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:39:30.732899  741885 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:39:30.732902  741885 notify.go:221] Checking for updates...
	I1027 22:39:30.735368  741885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:39:30.736459  741885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:39:30.737481  741885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:39:30.738537  741885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:39:30.739499  741885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:39:30.743360  741885 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:30.744095  741885 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:39:30.771029  741885 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:39:30.771153  741885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:30.841693  741885 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:39:30.828533646 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:30.841848  741885 docker.go:318] overlay module found
	I1027 22:39:30.846445  741885 out.go:179] * Using the docker driver based on existing profile
	I1027 22:39:30.254408  739756 kapi.go:59] client config for kubernetes-upgrade-695499: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/client.key", CAFile:"/home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c7c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 22:39:30.254866  739756 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-695499"
	W1027 22:39:30.254893  739756 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:39:30.254930  739756 host.go:66] Checking if "kubernetes-upgrade-695499" exists ...
	I1027 22:39:30.255402  739756 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-695499 --format={{.State.Status}}
	I1027 22:39:30.255453  739756 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:39:30.255466  739756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:39:30.255514  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:30.280936  739756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kubernetes-upgrade-695499/id_rsa Username:docker}
	I1027 22:39:30.284140  739756 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:39:30.284166  739756 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:39:30.284226  739756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-695499
	I1027 22:39:30.309404  739756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kubernetes-upgrade-695499/id_rsa Username:docker}
	I1027 22:39:30.372435  739756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:30.392994  739756 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:39:30.393073  739756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:39:30.406801  739756 api_server.go:72] duration metric: took 176.275265ms to wait for apiserver process to appear ...
	I1027 22:39:30.406829  739756 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:39:30.406894  739756 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:39:30.413151  739756 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 22:39:30.423073  739756 api_server.go:141] control plane version: v1.34.1
	I1027 22:39:30.423108  739756 api_server.go:131] duration metric: took 16.271339ms to wait for apiserver health ...
	I1027 22:39:30.423119  739756 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:39:30.426851  739756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:39:30.430654  739756 system_pods.go:59] 9 kube-system pods found
	I1027 22:39:30.430754  739756 system_pods.go:61] "coredns-66bc5c9577-twvfw" [ecf10597-f8b0-4094-8e80-92508599a88c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:39:30.430802  739756 system_pods.go:61] "coredns-66bc5c9577-zj9pq" [ecdb7b27-03c8-45ee-bbd9-e5db0f7c8200] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:39:30.430830  739756 system_pods.go:61] "etcd-kubernetes-upgrade-695499" [b850192e-cdbd-40d7-a186-103159c9700b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:39:30.430839  739756 system_pods.go:61] "kindnet-pn6mn" [23d406b8-2b53-4524-88b9-26f33ec01eb0] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 22:39:30.430849  739756 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-695499" [009b6086-ab29-4ba7-b429-45f3c1e1dc6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:39:30.430858  739756 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-695499" [49e5d334-8f0d-43eb-bd92-89f06d008bd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:39:30.430867  739756 system_pods.go:61] "kube-proxy-5pfhb" [27267f54-109b-403a-b206-5984897baea4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 22:39:30.430873  739756 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-695499" [7aa34729-e0e3-4ab3-b7fa-6b8361cefd0f] Running
	I1027 22:39:30.430879  739756 system_pods.go:61] "storage-provisioner" [f8e30b45-4d95-482a-b935-b1e38c3e1d7b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:39:30.430888  739756 system_pods.go:74] duration metric: took 7.761986ms to wait for pod list to return data ...
	I1027 22:39:30.430904  739756 kubeadm.go:587] duration metric: took 200.383099ms to wait for: map[apiserver:true system_pods:true]
	I1027 22:39:30.430921  739756 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:39:30.431245  739756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:39:30.445362  739756 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:39:30.445392  739756 node_conditions.go:123] node cpu capacity is 8
	I1027 22:39:30.445406  739756 node_conditions.go:105] duration metric: took 14.479544ms to run NodePressure ...
	I1027 22:39:30.445423  739756 start.go:242] waiting for startup goroutines ...
	I1027 22:39:30.847631  741885 start.go:307] selected driver: docker
	I1027 22:39:30.847661  741885 start.go:928] validating driver "docker" against &{Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:30.847782  741885 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:39:30.848738  741885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:30.939739  741885 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:39:30.924341067 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:30.940193  741885 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:39:30.940248  741885 cni.go:84] Creating CNI manager for ""
	I1027 22:39:30.940336  741885 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:30.940434  741885 start.go:351] cluster config:
	{Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:30.942277  741885 out.go:179] * Starting "embed-certs-829976" primary control-plane node in "embed-certs-829976" cluster
	I1027 22:39:30.943311  741885 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:39:30.944477  741885 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:39:30.945580  741885 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:30.945631  741885 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:39:30.945644  741885 cache.go:59] Caching tarball of preloaded images
	I1027 22:39:30.945681  741885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:39:30.945760  741885 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:39:30.945774  741885 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:39:30.945893  741885 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/config.json ...
	I1027 22:39:30.970221  741885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:39:30.970256  741885 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:39:30.970281  741885 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:39:30.970316  741885 start.go:360] acquireMachinesLock for embed-certs-829976: {Name:mkb6532b5a873894095ab02df76bd0a154f264d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:39:30.970403  741885 start.go:364] duration metric: took 59.387µs to acquireMachinesLock for "embed-certs-829976"
	I1027 22:39:30.970429  741885 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:39:30.970437  741885 fix.go:55] fixHost starting: 
	I1027 22:39:30.970754  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:30.993616  741885 fix.go:113] recreateIfNeeded on embed-certs-829976: state=Stopped err=<nil>
	W1027 22:39:30.993655  741885 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 22:39:31.018671  739756 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 22:39:31.020014  739756 addons.go:514] duration metric: took 789.409993ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 22:39:31.020062  739756 start.go:247] waiting for cluster config update ...
	I1027 22:39:31.020079  739756 start.go:256] writing updated cluster config ...
	I1027 22:39:31.020384  739756 ssh_runner.go:195] Run: rm -f paused
	I1027 22:39:31.087818  739756 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:39:31.089390  739756 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-695499" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.744192845Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.748638756Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.748663606Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.921004947Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=9d0f4746-95c3-4457-95e3-9b4a63366983 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.921665655Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b82bc2f8-4e64-413f-bd12-db39e219c82f name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.923389782Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4822bc81-6c61-4ab0-ae21-bfa1e56a1528 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.927529482Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms/kubernetes-dashboard" id=6ba10782-4b35-4cd5-8968-986e63c5b527 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.927658627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.931851685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.932118038Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d3c9ed15123039bdf9fe249026e62e8265f74879469924345afb39580715aa46/merged/etc/group: no such file or directory"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.932561718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.96626447Z" level=info msg="Created container d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms/kubernetes-dashboard" id=6ba10782-4b35-4cd5-8968-986e63c5b527 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.966884007Z" level=info msg="Starting container: d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd" id=23eef332-42a1-4cbe-a119-4e8fad8a4462 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:38:49 no-preload-188814 crio[558]: time="2025-10-27T22:38:49.968716212Z" level=info msg="Started container" PID=1718 containerID=d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms/kubernetes-dashboard id=23eef332-42a1-4cbe-a119-4e8fad8a4462 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fafe6bba9c65f4f40ecb5d857f6f36f3715aa0cd77dc703c90d7726140c83746
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.799726306Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9617a45a-b8ef-43a9-b9df-7499eedaf9e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.800808595Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=398f38bb-44f5-41bd-acff-bf9aa03b2881 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.80189395Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq/dashboard-metrics-scraper" id=2ec4d0f9-89ab-45d9-a9e0-e3f07722f922 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.80206213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.807617149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.808096557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.841704647Z" level=info msg="Created container 03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq/dashboard-metrics-scraper" id=2ec4d0f9-89ab-45d9-a9e0-e3f07722f922 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.84231429Z" level=info msg="Starting container: 03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f" id=b8bc6aed-8c06-42db-89b0-02a5bbbfc175 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.843979683Z" level=info msg="Started container" PID=1738 containerID=03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq/dashboard-metrics-scraper id=b8bc6aed-8c06-42db-89b0-02a5bbbfc175 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d4aed68b94cddd094744a3fc9d46a85b4f3ab6c82cf1fbd819fcfceb4e54075
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.937009577Z" level=info msg="Removing container: 289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265" id=ff775b84-f2d2-43b2-847b-a574628acd8c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:39:06 no-preload-188814 crio[558]: time="2025-10-27T22:39:06.947262006Z" level=info msg="Removed container 289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq/dashboard-metrics-scraper" id=ff775b84-f2d2-43b2-847b-a574628acd8c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	03e8b617bf379       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   7d4aed68b94cd       dashboard-metrics-scraper-6ffb444bf9-dxrwq   kubernetes-dashboard
	d56d3a6ef1dde       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   fafe6bba9c65f       kubernetes-dashboard-855c9754f9-6rnms        kubernetes-dashboard
	a29b5b5b38c62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Running             storage-provisioner         1                   62d7454d87cee       storage-provisioner                          kube-system
	27919fedcb8fe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   d402c6aab6e34       coredns-66bc5c9577-m8lfc                     kube-system
	e3d94fe20b04d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   b3fab1a7da743       busybox                                      default
	03b2ccc9d6b69       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   69ebabfef1dd1       kindnet-thlc6                                kube-system
	bf414ccda38a6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   62d7454d87cee       storage-provisioner                          kube-system
	b65d6d450bfe8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   73eeb7fbf23a2       kube-proxy-4nwvc                             kube-system
	cb9c2393e5478       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   acf4b609a260e       etcd-no-preload-188814                       kube-system
	221d83fbd9034       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   bafe14750757c       kube-apiserver-no-preload-188814             kube-system
	002c10e5f271a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   0218110d515f2       kube-scheduler-no-preload-188814             kube-system
	da762329de2a8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   e727e2a0ab4fe       kube-controller-manager-no-preload-188814    kube-system
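
The table above is rendered from the same /runtime.v1.RuntimeService API that the CRI-O log records (ImageStatus, CreateContainer, StartContainer, ...). Below is a minimal sketch, not part of the test suite, of the equivalent ListContainers call issued directly against CRI-O; the socket path is CRI-O's default and the direct gRPC dial is an assumption for illustration.

// crilist.go - illustrative sketch: the /runtime.v1.RuntimeService
// ListContainers call behind the table above, against CRI-O's default
// socket (/var/run/crio/crio.sock is an assumed default, not read from
// this cluster's config).
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("list containers: %v", err)
	}
	// Print a truncated ID, name, and state, roughly matching the table.
	for _, c := range resp.Containers {
		fmt.Printf("%-13.13s %-27s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}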
	
	
	==> coredns [27919fedcb8feaa43c4a00ba37dfeb16c6adca323954d5ba6144478dd68929b0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46833 - 41502 "HINFO IN 5787618683313925941.7437279659148615852. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03251686s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
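
The i/o timeouts above are CoreDNS's kubernetes plugin failing to reach the Service VIP 10.96.0.1:443 while the pod network was still converging; the client-go reflector retries the same list calls until one succeeds. A minimal sketch of the equivalent in-cluster Service list, runnable from any pod and assuming nothing beyond standard client-go:

// listservices.go - minimal sketch (not part of the test suite): the same
// in-cluster Service list that CoreDNS's kubernetes plugin was retrying.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig resolves the apiserver through the Service VIP
	// (KUBERNETES_SERVICE_HOST/PORT), i.e. the 10.96.0.1:443 seen above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("clientset: %v", err)
	}
	svcs, err := client.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		log.Fatalf("list services: %v", err) // an "i/o timeout" would surface here
	}
	fmt.Printf("listed %d services\n", len(svcs.Items))
}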
	
	
	==> describe nodes <==
	Name:               no-preload-188814
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-188814
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=no-preload-188814
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_37_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:37:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-188814
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:39:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:39:09 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:39:09 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:39:09 +0000   Mon, 27 Oct 2025 22:37:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:39:09 +0000   Mon, 27 Oct 2025 22:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-188814
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9b25c6cb-fee1-43be-8dc1-88bc737c041a
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-m8lfc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-188814                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-thlc6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-188814              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-188814     200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-4nwvc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-188814              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dxrwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6rnms         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-188814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-188814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-188814 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node no-preload-188814 event: Registered Node no-preload-188814 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-188814 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node no-preload-188814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node no-preload-188814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node no-preload-188814 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node no-preload-188814 event: Registered Node no-preload-188814 in Controller
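
The "Allocated resources" block in the node description above is simply the column sums of the pod table: CPU requests 100m+100m+100m+250m+200m+100m = 850m against 8 allocatable cores, which rounds down to the 10% shown. A worked sketch of that arithmetic using apimachinery's resource.Quantity, the same type kubectl uses; the request strings are copied from the table above:

// sumrequests.go - illustrative only: reproduces the 850m (10%) CPU-request
// total from the "Non-terminated Pods" table via resource.Quantity.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Non-zero per-pod CPU requests copied from the pod table above.
	requests := []string{"100m", "100m", "100m", "250m", "200m", "100m"}

	total := resource.MustParse("0")
	for _, r := range requests {
		q := resource.MustParse(r)
		total.Add(q)
	}
	// Prints "850m of 8 cores = 10%", matching the Allocated resources line.
	pct := total.MilliValue() * 100 / (8 * 1000)
	fmt.Printf("%s of 8 cores = %d%%\n", total.String(), pct)
}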
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [cb9c2393e547842667e6423cc2d69ddfd9af4a1579d9d9531bc90992a0e1b634] <==
	{"level":"warn","ts":"2025-10-27T22:38:37.631324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.640583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.653320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.664037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.677565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.694150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.718884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.728181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.737541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.764463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.785988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.807465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.823804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.836703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.848451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.861201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.872832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.884560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.893786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.904703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.924182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.941494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:37.951836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:38:38.065391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38806","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:38:53.587872Z","caller":"traceutil/trace.go:172","msg":"trace[466806980] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"111.828043ms","start":"2025-10-27T22:38:53.476028Z","end":"2025-10-27T22:38:53.587856Z","steps":["trace[466806980] 'process raft request'  (duration: 111.702054ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:39:32 up  2:21,  0 user,  load average: 3.54, 2.72, 2.75
	Linux no-preload-188814 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03b2ccc9d6b692141146e0afcaf3653fe9df218b37a3f09868f8fb07bbeeac91] <==
	I1027 22:38:39.480522       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:38:39.480794       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 22:38:39.481015       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:38:39.481600       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:38:39.482069       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:38:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:38:39.683811       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:38:39.683853       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:38:39.683865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:38:39.684029       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:38:40.077303       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:38:40.077334       1 metrics.go:72] Registering metrics
	I1027 22:38:40.077386       1 controller.go:711] "Syncing nftables rules"
	I1027 22:38:49.685052       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:38:49.685126       1 main.go:301] handling current node
	I1027 22:38:59.684059       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:38:59.684088       1 main.go:301] handling current node
	I1027 22:39:09.686062       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:39:09.686096       1 main.go:301] handling current node
	I1027 22:39:19.685166       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:39:19.685287       1 main.go:301] handling current node
	I1027 22:39:29.690173       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 22:39:29.690216       1 main.go:301] handling current node
	
	
	==> kube-apiserver [221d83fbd903479a3c762233eb12a7ec04e14004807c2ce9ea61f8e212524c54] <==
	I1027 22:38:38.786808       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 22:38:38.786842       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 22:38:38.786856       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 22:38:38.786968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:38:38.789353       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:38:38.789411       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 22:38:38.789453       1 aggregator.go:171] initial CRD sync complete...
	I1027 22:38:38.789472       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:38:38.789480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:38:38.789485       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:38:38.796864       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:38:38.827261       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:38:38.848196       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:38:38.858719       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:38:38.954735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:38:39.239151       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:38:39.314001       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:38:39.353139       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:38:39.364163       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:38:39.420899       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.241.236"}
	I1027 22:38:39.431924       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.44.178"}
	I1027 22:38:39.682507       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:38:42.465324       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:38:42.665618       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:38:42.764154       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
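
The two ClusterIPs the apiserver allocated above (10.109.241.236 and 10.98.44.178), like the 10.96.0.1 VIP from the CoreDNS errors, all fall inside the ServiceCIDR 10.96.0.0/12 from the cluster config. A tiny illustrative check with net/netip:

// cidrcheck.go - illustrative: confirms the allocated ClusterIPs above sit
// inside ServiceCIDR 10.96.0.0/12 (range 10.96.0.0 through 10.111.255.255).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	cidr := netip.MustParsePrefix("10.96.0.0/12")
	// The apiserver VIP and the two dashboard Service IPs from the logs.
	for _, ip := range []string{"10.96.0.1", "10.109.241.236", "10.98.44.178"} {
		fmt.Printf("%s in %s: %v\n", ip, cidr, cidr.Contains(netip.MustParseAddr(ip)))
	}
}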
	
	
	==> kube-controller-manager [da762329de2a8c6c1610d73b7afd01c216fefae715c921b854c125c03fe0ac85] <==
	I1027 22:38:42.211633       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 22:38:42.212534       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:38:42.212670       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:38:42.212762       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-188814"
	I1027 22:38:42.212850       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:38:42.215916       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:38:42.235615       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:38:42.242766       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:38:42.246081       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 22:38:42.249335       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 22:38:42.253607       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:38:42.256341       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:38:42.259697       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:38:42.260020       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:38:42.260032       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:38:42.260056       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 22:38:42.260152       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:38:42.260187       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:38:42.260430       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 22:38:42.260433       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:38:42.262439       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:38:42.262464       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:38:42.264731       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:38:42.264740       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:38:42.282017       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b65d6d450bfe889472b2618326bf68f2932d48e1ad884af95ec5a48f72d99f28] <==
	I1027 22:38:39.300755       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:38:39.376741       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:38:39.477387       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:38:39.477425       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 22:38:39.477526       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:38:39.499206       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:38:39.499266       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:38:39.505413       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:38:39.505828       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:38:39.505854       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:38:39.507348       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:38:39.507841       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:38:39.507436       1 config.go:309] "Starting node config controller"
	I1027 22:38:39.508144       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:38:39.508161       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:38:39.507365       1 config.go:200] "Starting service config controller"
	I1027 22:38:39.508171       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:38:39.507441       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:38:39.508194       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:38:39.608310       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:38:39.608335       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:38:39.608374       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [002c10e5f271a370eae7e9ac4bbcfa8188b01c92b6b9cb7d034828d114167209] <==
	I1027 22:38:37.648078       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:38:38.756815       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:38:38.756873       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:38:38.756887       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:38:38.756898       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:38:38.788339       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:38:38.788384       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:38:38.797618       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:38:38.798354       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:38:38.799468       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:38:38.799565       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:38:38.898988       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:38:42 no-preload-188814 kubelet[707]: I1027 22:38:42.977562     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrkn7\" (UniqueName: \"kubernetes.io/projected/4fdc4f52-990c-4a10-9be5-3f62c053b5f0-kube-api-access-nrkn7\") pod \"dashboard-metrics-scraper-6ffb444bf9-dxrwq\" (UID: \"4fdc4f52-990c-4a10-9be5-3f62c053b5f0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq"
	Oct 27 22:38:42 no-preload-188814 kubelet[707]: I1027 22:38:42.977586     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/95a930ae-c927-4ee0-88ae-5ceaa45d8edc-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6rnms\" (UID: \"95a930ae-c927-4ee0-88ae-5ceaa45d8edc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms"
	Oct 27 22:38:42 no-preload-188814 kubelet[707]: I1027 22:38:42.977600     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5xw\" (UniqueName: \"kubernetes.io/projected/95a930ae-c927-4ee0-88ae-5ceaa45d8edc-kube-api-access-kc5xw\") pod \"kubernetes-dashboard-855c9754f9-6rnms\" (UID: \"95a930ae-c927-4ee0-88ae-5ceaa45d8edc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms"
	Oct 27 22:38:43 no-preload-188814 kubelet[707]: I1027 22:38:43.470702     707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 22:38:45 no-preload-188814 kubelet[707]: I1027 22:38:45.868402     707 scope.go:117] "RemoveContainer" containerID="529e6ce96f2d27d147bff38c2fb2d68470ab033828fe50847640b43942945199"
	Oct 27 22:38:46 no-preload-188814 kubelet[707]: I1027 22:38:46.878307     707 scope.go:117] "RemoveContainer" containerID="529e6ce96f2d27d147bff38c2fb2d68470ab033828fe50847640b43942945199"
	Oct 27 22:38:46 no-preload-188814 kubelet[707]: I1027 22:38:46.878639     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:38:46 no-preload-188814 kubelet[707]: E1027 22:38:46.878835     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:38:47 no-preload-188814 kubelet[707]: I1027 22:38:47.885023     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:38:47 no-preload-188814 kubelet[707]: E1027 22:38:47.885203     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:38:51 no-preload-188814 kubelet[707]: I1027 22:38:51.866070     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:38:51 no-preload-188814 kubelet[707]: E1027 22:38:51.866468     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:38:54 no-preload-188814 kubelet[707]: I1027 22:38:54.434201     707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6rnms" podStartSLOduration=5.676763915 podStartE2EDuration="12.434178793s" podCreationTimestamp="2025-10-27 22:38:42 +0000 UTC" firstStartedPulling="2025-10-27 22:38:43.165325891 +0000 UTC m=+7.464511720" lastFinishedPulling="2025-10-27 22:38:49.922740769 +0000 UTC m=+14.221926598" observedRunningTime="2025-10-27 22:38:50.909205418 +0000 UTC m=+15.208391272" watchObservedRunningTime="2025-10-27 22:38:54.434178793 +0000 UTC m=+18.733364633"
	Oct 27 22:39:06 no-preload-188814 kubelet[707]: I1027 22:39:06.799158     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:39:06 no-preload-188814 kubelet[707]: I1027 22:39:06.935646     707 scope.go:117] "RemoveContainer" containerID="289a5c2e06ca38fdc7cea27c3532874aad4e44642ef51dedc5bca9d0b73e2265"
	Oct 27 22:39:06 no-preload-188814 kubelet[707]: I1027 22:39:06.935874     707 scope.go:117] "RemoveContainer" containerID="03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	Oct 27 22:39:06 no-preload-188814 kubelet[707]: E1027 22:39:06.936113     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:39:11 no-preload-188814 kubelet[707]: I1027 22:39:11.866930     707 scope.go:117] "RemoveContainer" containerID="03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	Oct 27 22:39:11 no-preload-188814 kubelet[707]: E1027 22:39:11.867166     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:39:25 no-preload-188814 kubelet[707]: I1027 22:39:25.799409     707 scope.go:117] "RemoveContainer" containerID="03e8b617bf3797cd729f13fd5d5da2e56caa90c99549b0f9914bb9ea3e59513f"
	Oct 27 22:39:25 no-preload-188814 kubelet[707]: E1027 22:39:25.799638     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxrwq_kubernetes-dashboard(4fdc4f52-990c-4a10-9be5-3f62c053b5f0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxrwq" podUID="4fdc4f52-990c-4a10-9be5-3f62c053b5f0"
	Oct 27 22:39:27 no-preload-188814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:39:27 no-preload-188814 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:39:27 no-preload-188814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:39:27 no-preload-188814 systemd[1]: kubelet.service: Consumed 1.664s CPU time.
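
The CrashLoopBackOff messages above show kubelet's per-container restart back-off doubling from 10s to 20s. The sketch below reproduces that schedule using kubelet's upstream defaults (10s base, doubling per restart, capped at 5 minutes); these are assumed defaults, not values read from this cluster.

// backoff.go - illustrative sketch of the CrashLoopBackOff delays seen above
// ("back-off 10s", then "back-off 20s"); base and cap are kubelet's upstream
// defaults, assumed here rather than read from the node.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := base
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2 // kubelet doubles the delay after each failed restart
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}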
	
	
	==> kubernetes-dashboard [d56d3a6ef1dde9a62cb0275fe4f0a2e1efd911aaa05d620d243b42b04c0c0dbd] <==
	2025/10/27 22:38:49 Starting overwatch
	2025/10/27 22:38:49 Using namespace: kubernetes-dashboard
	2025/10/27 22:38:49 Using in-cluster config to connect to apiserver
	2025/10/27 22:38:49 Using secret token for csrf signing
	2025/10/27 22:38:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:38:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:38:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 22:38:50 Generating JWE encryption key
	2025/10/27 22:38:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:38:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:38:50 Initializing JWE encryption key from synchronized object
	2025/10/27 22:38:50 Creating in-cluster Sidecar client
	2025/10/27 22:38:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:38:50 Serving insecurely on HTTP port: 9090
	2025/10/27 22:39:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a29b5b5b38c62144f9600bfdf7a35a1afbb4a79f4066d872710ac5cc46b01177] <==
	W1027 22:39:07.420934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:09.424123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:09.428181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:11.431506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:11.435259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:13.439780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:13.445189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:15.449023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:15.454034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:17.457580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:17.461223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:19.465649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:19.471036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:21.475301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:21.479068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:23.482505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:23.486968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:25.490335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:25.494766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:27.497588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:27.506369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:29.509581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:29.513739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:31.518349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:31.525914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bf414ccda38a64edfa3182d7b6c18f2e34500e5bba5df0ab6392597ef8eabd7d] <==
	I1027 22:38:39.244455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:38:39.248174       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
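A note on the storage-provisioner log above: the paired "v1 Endpoints is deprecated" warnings repeat every two seconds because this provisioner build still takes its leader-election lock on a v1 Endpoints object, so every renewal round-trips through the deprecated API. A minimal client-go sketch of the coordination.k8s.io Lease lock that avoids the warning (lock name, namespace, and timings are illustrative assumptions, not minikube's actual configuration):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // pod name doubles as the lock identity

		// A Lease lock renews against coordination.k8s.io/v1, so the API
		// server no longer emits the Endpoints deprecation warning.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader; start provisioning") },
				OnStoppedLeading: func() { log.Println("lost leadership; exiting") },
			},
		})
	}
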
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188814 -n no-preload-188814
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-188814 -n no-preload-188814: exit status 2 (361.832376ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-188814 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.31s)
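The post-mortem step at helpers_test.go:269 above asks the cluster for every pod whose phase is not Running, across all namespaces, using a server-side field selector. The same query via client-go, as a sketch (kubeconfig loading is simplified and the test's --context selection is omitted):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Empty namespace ("") means all namespaces; the field selector is
		// evaluated by the API server, so only non-Running pods come back.
		pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
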

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-927034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-927034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (282.465113ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
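The MK_ADDON_ENABLE_PAUSED exit above comes from a pre-flight check that no containers are paused before the addon is enabled; here `sudo runc list -f json` fails outright because runc's default state directory /run/runc was never created on this crio node. A hedged sketch of that style of check (JSON field names follow runc's documented `list` output; this is not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// runcContainer mirrors the subset of fields `runc list -f json` prints.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausedContainers shells out the same way the failing check did.
	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// e.g. "open /run/runc: no such file or directory" when
			// runc's default state dir is absent, as in this run.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			log.Fatal(err) // this test run would land here
		}
		fmt.Println("paused:", ids)
	}
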
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-927034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-927034 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-927034 describe deploy/metrics-server -n kube-system: exit status 1 (83.875973ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-927034 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
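For context on the assertion at start_stop_delete_test.go:219: the enable command passed --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, and the expected string shows the override registry being prefixed onto the override image. A small sketch of that composition, inferred from the expected string rather than lifted from minikube:

	package main

	import (
		"fmt"
		"strings"
	)

	// overrideImage joins a --registries override with a --images override
	// the way the expected test string suggests: registry + "/" + image.
	func overrideImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return strings.TrimSuffix(registry, "/") + "/" + image
	}

	func main() {
		got := overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4")
		fmt.Println(got) // fake.domain/registry.k8s.io/echoserver:1.4
	}
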
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-927034
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-927034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a",
	        "Created": "2025-10-27T22:39:00.365066876Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 734896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:39:00.397749253Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/hosts",
	        "LogPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a-json.log",
	        "Name": "/default-k8s-diff-port-927034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-927034:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-927034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a",
	                "LowerDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-927034",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-927034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-927034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-927034",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-927034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d679768acea42efbba3be3aa4d81cba40c21492fd2fcb52efee947c2f2c1ca89",
	            "SandboxKey": "/var/run/docker/netns/d679768acea4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-927034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:a7:27:04:4a:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "25e72b99ac2bb46615ab3180c2d17b65b027e144e1892b4833bd16fb1b4eb32a",
	                    "EndpointID": "71279be141dc6c856c52e7c963412e583a2dc7b44c6d7a9c8b185ac7f2a2eac7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-927034",
	                        "d0fdd499dd47"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927034 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-927034 logs -n 25: (1.099849742s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-188814 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p cert-expiration-219241                                                                                                                                                                                                                     │ cert-expiration-219241       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ addons  │ enable dashboard -p no-preload-188814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ image   │ old-k8s-version-908589 image list --format=json                                                                                                                                                                                               │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ pause   │ -p old-k8s-version-908589 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p disable-driver-mounts-617659                                                                                                                                                                                                               │ disable-driver-mounts-617659 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-829976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ stop    │ -p embed-certs-829976 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ image   │ no-preload-188814 image list --format=json                                                                                                                                                                                                    │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ pause   │ -p no-preload-188814 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-829976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-695499                                                                                                                                                                                                                  │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p auto-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:39:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:39:37.907657  745063 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:39:37.908199  745063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:37.908215  745063 out.go:374] Setting ErrFile to fd 2...
	I1027 22:39:37.908222  745063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:37.908680  745063 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:39:37.909824  745063 out.go:368] Setting JSON to false
	I1027 22:39:37.911273  745063 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8517,"bootTime":1761596261,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:39:37.911366  745063 start.go:143] virtualization: kvm guest
	I1027 22:39:37.912926  745063 out.go:179] * [auto-293335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:39:37.914353  745063 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:39:37.914372  745063 notify.go:221] Checking for updates...
	I1027 22:39:37.916603  745063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:39:37.917757  745063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:39:37.918663  745063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:39:37.923403  745063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:39:37.924424  745063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:39:37.926044  745063 config.go:182] Loaded profile config "default-k8s-diff-port-927034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:37.926206  745063 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:37.926345  745063 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:37.926450  745063 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:39:37.952649  745063 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:39:37.952779  745063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:38.033095  745063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-27 22:39:38.021788444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:38.033263  745063 docker.go:318] overlay module found
	I1027 22:39:38.034869  745063 out.go:179] * Using the docker driver based on user configuration
	I1027 22:39:38.035915  745063 start.go:307] selected driver: docker
	I1027 22:39:38.035933  745063 start.go:928] validating driver "docker" against <nil>
	I1027 22:39:38.035982  745063 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:39:38.036762  745063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:38.099032  745063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-27 22:39:38.089319282 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:38.099282  745063 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:39:38.099562  745063 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:39:38.101200  745063 out.go:179] * Using Docker driver with root privileges
	I1027 22:39:38.102471  745063 cni.go:84] Creating CNI manager for ""
	I1027 22:39:38.102546  745063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:38.102561  745063 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:39:38.102656  745063 start.go:351] cluster config:
	{Name:auto-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:38.103968  745063 out.go:179] * Starting "auto-293335" primary control-plane node in "auto-293335" cluster
	I1027 22:39:38.105034  745063 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:39:38.106512  745063 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:39:38.107610  745063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:38.107652  745063 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:39:38.107668  745063 cache.go:59] Caching tarball of preloaded images
	I1027 22:39:38.107683  745063 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:39:38.107772  745063 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:39:38.107788  745063 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:39:38.107939  745063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/config.json ...
	I1027 22:39:38.107981  745063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/config.json: {Name:mk1ae734ed5e8f20b380b41f1567a6de126721bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:38.135839  745063 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:39:38.135862  745063 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:39:38.135881  745063 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:39:38.135911  745063 start.go:360] acquireMachinesLock for auto-293335: {Name:mk68871849e580837d3f745ed8c659efb677566e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:39:38.136038  745063 start.go:364] duration metric: took 98.223µs to acquireMachinesLock for "auto-293335"
	I1027 22:39:38.136067  745063 start.go:93] Provisioning new machine with config: &{Name:auto-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:39:38.136162  745063 start.go:125] createHost starting for "" (driver="docker")
	I1027 22:39:33.933303  743829 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:39:33.933495  743829 start.go:159] libmachine.API.Create for "newest-cni-290425" (driver="docker")
	I1027 22:39:33.933530  743829 client.go:173] LocalClient.Create starting
	I1027 22:39:33.933602  743829 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:39:33.933642  743829 main.go:143] libmachine: Decoding PEM data...
	I1027 22:39:33.933670  743829 main.go:143] libmachine: Parsing certificate...
	I1027 22:39:33.933734  743829 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:39:33.933760  743829 main.go:143] libmachine: Decoding PEM data...
	I1027 22:39:33.933773  743829 main.go:143] libmachine: Parsing certificate...
	I1027 22:39:33.934140  743829 cli_runner.go:164] Run: docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:39:33.949802  743829 cli_runner.go:211] docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:39:33.949871  743829 network_create.go:284] running [docker network inspect newest-cni-290425] to gather additional debugging logs...
	I1027 22:39:33.949902  743829 cli_runner.go:164] Run: docker network inspect newest-cni-290425
	W1027 22:39:33.965893  743829 cli_runner.go:211] docker network inspect newest-cni-290425 returned with exit code 1
	I1027 22:39:33.965917  743829 network_create.go:287] error running [docker network inspect newest-cni-290425]: docker network inspect newest-cni-290425: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-290425 not found
	I1027 22:39:33.965931  743829 network_create.go:289] output of [docker network inspect newest-cni-290425]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-290425 not found
	
	** /stderr **
	I1027 22:39:33.966038  743829 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:33.982177  743829 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:39:33.983225  743829 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:39:33.983747  743829 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:39:33.984872  743829 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1e840}
	I1027 22:39:33.984899  743829 network_create.go:124] attempt to create docker network newest-cni-290425 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 22:39:33.984958  743829 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-290425 newest-cni-290425
	I1027 22:39:34.045715  743829 network_create.go:108] docker network newest-cni-290425 192.168.76.0/24 created
	I1027 22:39:34.045770  743829 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-290425" container
	I1027 22:39:34.045851  743829 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:39:34.066068  743829 cli_runner.go:164] Run: docker volume create newest-cni-290425 --label name.minikube.sigs.k8s.io=newest-cni-290425 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:39:34.086990  743829 oci.go:103] Successfully created a docker volume newest-cni-290425
	I1027 22:39:34.087070  743829 cli_runner.go:164] Run: docker run --rm --name newest-cni-290425-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-290425 --entrypoint /usr/bin/test -v newest-cni-290425:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:39:34.459469  743829 oci.go:107] Successfully prepared a docker volume newest-cni-290425
	I1027 22:39:34.459516  743829 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:34.459541  743829 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:39:34.459621  743829 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-290425:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 22:39:37.592976  743829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-290425:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.133296316s)
	I1027 22:39:37.593010  743829 kic.go:203] duration metric: took 3.133464845s to extract preloaded images to volume ...
	W1027 22:39:37.593110  743829 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:39:37.593143  743829 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:39:37.593189  743829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:39:37.665112  743829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-290425 --name newest-cni-290425 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-290425 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-290425 --network newest-cni-290425 --ip 192.168.76.2 --volume newest-cni-290425:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:39:37.964421  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Running}}
	I1027 22:39:37.989902  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:39:38.013162  743829 cli_runner.go:164] Run: docker exec newest-cni-290425 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:39:38.067961  743829 oci.go:144] the created container "newest-cni-290425" has a running status.
	I1027 22:39:38.068009  743829 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa...
	I1027 22:39:38.328842  743829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:39:38.368128  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:39:38.395079  743829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:39:38.395104  743829 kic_runner.go:114] Args: [docker exec --privileged newest-cni-290425 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:39:38.446852  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:39:38.479364  743829 machine.go:94] provisionDockerMachine start ...
	I1027 22:39:38.479474  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:38.498217  743829 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:38.498583  743829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1027 22:39:38.498605  743829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:39:38.650131  743829 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:39:38.650178  743829 ubuntu.go:182] provisioning hostname "newest-cni-290425"
	I1027 22:39:38.650251  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:38.670632  743829 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:38.670872  743829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1027 22:39:38.670892  743829 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-290425 && echo "newest-cni-290425" | sudo tee /etc/hostname
	I1027 22:39:35.838269  741885 provision.go:177] copyRemoteCerts
	I1027 22:39:35.838344  741885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:39:35.838418  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:35.857553  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:35.959276  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:39:35.976202  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:39:35.993380  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:39:36.017798  741885 provision.go:87] duration metric: took 1.159127378s to configureAuth
	I1027 22:39:36.017829  741885 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:39:36.018047  741885 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:36.018192  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:36.038801  741885 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:36.039083  741885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1027 22:39:36.039100  741885 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:39:37.655975  741885 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:39:37.656017  741885 machine.go:97] duration metric: took 6.322399097s to provisionDockerMachine
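The sysconfig write a few lines above (CRIO_MINIKUBE_OPTIONS with --insecure-registry 10.96.0.0/12) marks the whole service CIDR as an insecure registry range for cri-o and restarts the daemon; the echoed output confirms the file content. A sketch of assembling that command string, with a hypothetical helper name; minikube sends the result over SSH rather than running it locally.

    package main

    import "fmt"

    func crioSysconfigCmd(serviceCIDR string) string {
    	// Reproduces the provisioning command from the log: create the
    	// sysconfig dir, write the one-line options file, restart cri-o.
    	return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\n"+
    		"CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n"+
    		"\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
    		serviceCIDR)
    }

    func main() { fmt.Println(crioSysconfigCmd("10.96.0.0/12")) }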
	I1027 22:39:37.656033  741885 start.go:293] postStartSetup for "embed-certs-829976" (driver="docker")
	I1027 22:39:37.656049  741885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:39:37.656116  741885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:39:37.656181  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:37.677294  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:37.782059  741885 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:39:37.786201  741885 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:39:37.786226  741885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:39:37.786259  741885 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:39:37.786302  741885 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:39:37.786446  741885 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:39:37.786584  741885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:39:37.797153  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:37.823183  741885 start.go:296] duration metric: took 167.131364ms for postStartSetup
	I1027 22:39:37.823258  741885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:39:37.823307  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:37.846970  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:37.947336  741885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:39:37.953047  741885 fix.go:57] duration metric: took 6.982605213s for fixHost
	I1027 22:39:37.953069  741885 start.go:83] releasing machines lock for "embed-certs-829976", held for 6.982652453s
	I1027 22:39:37.953120  741885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-829976
	I1027 22:39:37.973104  741885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:39:37.973153  741885 ssh_runner.go:195] Run: cat /version.json
	I1027 22:39:37.973361  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:37.973671  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:38.001027  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:38.002000  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:38.107298  741885 ssh_runner.go:195] Run: systemctl --version
	I1027 22:39:38.176625  741885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:39:38.223016  741885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:39:38.231583  741885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:39:38.231655  741885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:39:38.242587  741885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
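The find/mv pipeline above sidelines any stock bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the CNI minikube installs later (kindnet, per the cni.go lines further down) is the only one the runtime loads; in this run nothing needed disabling. A rough local equivalent in Go, with the hypothetical helper name disableBridgeCNI:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNI renames bridge/podman CNI config files so the runtime
    // ignores them, mirroring the logged find -exec mv pipeline.
    func disableBridgeCNI(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var moved []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			p := filepath.Join(dir, name)
    			if err := os.Rename(p, p+".mk_disabled"); err != nil {
    				return moved, err
    			}
    			moved = append(moved, p)
    		}
    	}
    	return moved, nil
    }

    func main() {
    	moved, err := disableBridgeCNI("/etc/cni/net.d")
    	fmt.Println(moved, err)
    }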
	I1027 22:39:38.242614  741885 start.go:496] detecting cgroup driver to use...
	I1027 22:39:38.242645  741885 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:39:38.242690  741885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:39:38.258575  741885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:39:38.272695  741885 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:39:38.272747  741885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:39:38.295043  741885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:39:38.320744  741885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:39:38.435997  741885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:39:38.542975  741885 docker.go:234] disabling docker service ...
	I1027 22:39:38.543042  741885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:39:38.561360  741885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:39:38.576038  741885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:39:38.681167  741885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:39:38.784815  741885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:39:38.801099  741885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:39:38.817417  741885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:39:38.817478  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.828967  741885 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:39:38.829077  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.841914  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.855381  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.866643  741885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:39:38.876788  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.887193  741885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.897906  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.909124  741885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:39:38.917811  741885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:39:38.926613  741885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:39.028599  741885 ssh_runner.go:195] Run: sudo systemctl restart crio
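The run of sed -i edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, then reloads systemd and restarts cri-o. A condensed sketch of the same sequence run locally through sh -c (minikube does it over SSH); the grep/sed dance for default_sysctls is omitted here for brevity.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyCrioEdits mirrors the logged sed sequence. Afterwards the config
    // drop-in should contain:
    //   pause_image    = "registry.k8s.io/pause:3.10.1"
    //   cgroup_manager = "systemd"
    //   conmon_cgroup  = "pod"
    func applyCrioEdits() error {
    	cmds := []string{
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo systemctl daemon-reload`,
    		`sudo systemctl restart crio`,
    	}
    	for _, c := range cmds {
    		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
    			return fmt.Errorf("%q: %v: %s", c, err, out)
    		}
    	}
    	return nil
    }

    func main() { fmt.Println(applyCrioEdits()) }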
	I1027 22:39:39.207105  741885 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:39:39.207189  741885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:39:39.211509  741885 start.go:564] Will wait 60s for crictl version
	I1027 22:39:39.211579  741885 ssh_runner.go:195] Run: which crictl
	I1027 22:39:39.215967  741885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:39:39.243580  741885 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:39:39.243665  741885 ssh_runner.go:195] Run: crio --version
	I1027 22:39:39.274022  741885 ssh_runner.go:195] Run: crio --version
	I1027 22:39:39.311072  741885 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:39:39.312032  741885 cli_runner.go:164] Run: docker network inspect embed-certs-829976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:39.329677  741885 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 22:39:39.334133  741885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
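The bash one-liner above is the standard workaround for editing /etc/hosts under sudo: a plain sudo cmd > /etc/hosts fails because the shell performs the redirection as the unprivileged user, so the filtered file plus the fresh mapping is written to a temp file and then copied into place with sudo cp. A sketch that assembles the same command; hostsUpdateCmd is hypothetical.

    package main

    import "fmt"

    // hostsUpdateCmd builds the grep/echo/cp pipeline from the log. The tab
    // between IP and hostname matters: grep matches it via $'\t' and echo
    // writes a literal tab.
    func hostsUpdateCmd(ip, host string) string {
    	return fmt.Sprintf(
    		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; "+
    			"sudo cp /tmp/h.$$ \"/etc/hosts\"", host, ip, host)
    }

    func main() { fmt.Println(hostsUpdateCmd("192.168.85.1", "host.minikube.internal")) }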
	I1027 22:39:39.345082  741885 kubeadm.go:884] updating cluster {Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:39:39.345249  741885 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:39.345317  741885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:39.384843  741885 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:39.384868  741885 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:39:39.384924  741885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:39.416336  741885 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:39.416359  741885 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:39:39.416371  741885 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 22:39:39.416491  741885 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-829976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:39:39.416567  741885 ssh_runner.go:195] Run: crio config
	I1027 22:39:39.469798  741885 cni.go:84] Creating CNI manager for ""
	I1027 22:39:39.469818  741885 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:39.469844  741885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:39:39.469866  741885 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-829976 NodeName:embed-certs-829976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:39:39.470024  741885 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-829976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
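The generated kubeadm.yaml above is four YAML documents in one file: InitConfiguration (node registration, advertise address), ClusterConfiguration (control-plane endpoint, cert SANs, component extraArgs), KubeletConfiguration (systemd cgroup driver, cri-o socket, and eviction thresholds zeroed to effectively disable disk-pressure eviction), and KubeProxyConfiguration (conntrack timeouts zeroed so kube-proxy skips sysctls it cannot set inside a container). A sketch that walks those documents, assuming gopkg.in/yaml.v3 is available and the file was saved locally as kubeadm.yaml:

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the file above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		// Expect: InitConfiguration, ClusterConfiguration,
    		// KubeletConfiguration, KubeProxyConfiguration.
    		fmt.Println(doc.Kind, doc.APIVersion)
    	}
    }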
	I1027 22:39:39.470080  741885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:39:39.478204  741885 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:39:39.478257  741885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:39:39.486464  741885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 22:39:39.499120  741885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:39:39.512730  741885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 22:39:39.527236  741885 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:39:39.531231  741885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:39:39.541167  741885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:39.640644  741885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:39.667616  741885 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976 for IP: 192.168.85.2
	I1027 22:39:39.667636  741885 certs.go:195] generating shared ca certs ...
	I1027 22:39:39.667656  741885 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:39.667815  741885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:39:39.667877  741885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:39:39.667893  741885 certs.go:257] generating profile certs ...
	I1027 22:39:39.668037  741885 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.key
	I1027 22:39:39.668112  741885 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7
	I1027 22:39:39.668178  741885 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key
	I1027 22:39:39.668325  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:39:39.668368  741885 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:39:39.668381  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:39:39.668413  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:39:39.668443  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:39:39.668478  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:39:39.668530  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:39.669365  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:39:39.688561  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:39:39.710071  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:39:39.731217  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:39:39.755360  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 22:39:39.777514  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:39:39.796584  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:39:39.814889  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:39:39.833080  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:39:39.853822  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:39:39.874289  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:39:39.892937  741885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:39:39.907370  741885 ssh_runner.go:195] Run: openssl version
	I1027 22:39:39.914622  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:39:39.923799  741885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:39.928421  741885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:39.928483  741885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:39.971736  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:39:39.981986  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:39:39.991321  741885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:39:39.995642  741885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:39:39.995707  741885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:39:40.035737  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:39:40.044485  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:39:40.054098  741885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:39:40.058621  741885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:39:40.058685  741885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:39:40.104829  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
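Each certificate above gets the OpenSSL hashed-directory treatment: copy the PEM into /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash -noout, and symlink /etc/ssl/certs/<hash>.0 to it (b5213941, 51391683, and 3ec20f2e in this run); that hash-named link is how OpenSSL locates a CA during verification. A sketch of one such install, shelling out to openssl; installCALink is a hypothetical helper and needs root for the ln.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // installCALink computes the OpenSSL subject hash of a PEM file and links
    // /etc/ssl/certs/<hash>.0 at it, returning the link path it created.
    func installCALink(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	return link, exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
    	fmt.Println(installCALink("/usr/share/ca-certificates/minikubeCA.pem"))
    }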
	I1027 22:39:40.116102  741885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:39:40.120531  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:39:40.159593  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:39:40.207550  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:39:40.254153  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:39:40.329849  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:39:40.388674  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
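The six openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 86400 seconds (24 hours); openssl exits nonzero if a cert would expire inside that window, which is the cue to regenerate it before starting the cluster. The same check as a small Go function:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"time"
    )

    // certValidFor reports whether the certificate at path stays valid for at
    // least d: openssl -checkend exits 0 only if the cert outlives the window.
    func certValidFor(path string, d time.Duration) bool {
    	secs := strconv.Itoa(int(d.Seconds()))
    	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", secs).Run() == nil
    }

    func main() {
    	fmt.Println(certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }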
	I1027 22:39:40.428586  741885 kubeadm.go:401] StartCluster: {Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:40.428769  741885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:39:40.428860  741885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:39:40.467562  741885 cri.go:89] found id: "2f44a2722d5ccd7616df1090c6bb0dbee4aa51ec06009ab3a0c5b8d4976586ea"
	I1027 22:39:40.467697  741885 cri.go:89] found id: "45a7ab4d457895149bd74409ca1cf2067d30d698e93850bc8e3ded4ce106bbab"
	I1027 22:39:40.467708  741885 cri.go:89] found id: "9ebb5d429db0f5d2cfac0c88b414dd785a0b2d57b9fcfeb926197b670710530b"
	I1027 22:39:40.467721  741885 cri.go:89] found id: "e617c18783204a4f1e575bdec7825512002bad31cb3b04208481ca9f4c563564"
	I1027 22:39:40.467726  741885 cri.go:89] found id: ""
	I1027 22:39:40.467789  741885 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:39:40.485174  741885 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:40Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:39:40.485268  741885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:39:40.496454  741885 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:39:40.496479  741885 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:39:40.496535  741885 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:39:40.506544  741885 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:39:40.507296  741885 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-829976" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:39:40.507626  741885 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-829976" cluster setting kubeconfig missing "embed-certs-829976" context setting]
	I1027 22:39:40.508418  741885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:40.510378  741885 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:39:40.520956  741885 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 22:39:40.521003  741885 kubeadm.go:602] duration metric: took 24.515684ms to restartPrimaryControlPlane
	I1027 22:39:40.521017  741885 kubeadm.go:403] duration metric: took 92.448931ms to StartCluster
	I1027 22:39:40.521041  741885 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:40.521138  741885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:39:40.523005  741885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:40.523364  741885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:39:40.523486  741885 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:39:40.523576  741885 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:40.523620  741885 addons.go:69] Setting dashboard=true in profile "embed-certs-829976"
	I1027 22:39:40.523620  741885 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-829976"
	I1027 22:39:40.523636  741885 addons.go:238] Setting addon dashboard=true in "embed-certs-829976"
	W1027 22:39:40.523646  741885 addons.go:247] addon dashboard should already be in state true
	I1027 22:39:40.523649  741885 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-829976"
	W1027 22:39:40.523658  741885 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:39:40.523675  741885 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:39:40.523678  741885 addons.go:69] Setting default-storageclass=true in profile "embed-certs-829976"
	I1027 22:39:40.523702  741885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-829976"
	I1027 22:39:40.523709  741885 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:39:40.524246  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:40.524296  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:40.524690  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:40.526828  741885 out.go:179] * Verifying Kubernetes components...
	I1027 22:39:40.528001  741885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:40.555044  741885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:39:40.556196  741885 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:39:40.556281  741885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:39:40.556427  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:40.558748  741885 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 22:39:40.560001  741885 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 22:39:40.561077  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 22:39:40.561101  741885 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 22:39:40.561172  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:40.573106  741885 addons.go:238] Setting addon default-storageclass=true in "embed-certs-829976"
	W1027 22:39:40.573137  741885 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:39:40.573168  741885 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:39:40.573657  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:40.594995  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:40.603541  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:40.615043  741885 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:39:40.615169  741885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:39:40.615293  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:40.643505  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:40.717559  741885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:38.141349  745063 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:39:38.141578  745063 start.go:159] libmachine.API.Create for "auto-293335" (driver="docker")
	I1027 22:39:38.141610  745063 client.go:173] LocalClient.Create starting
	I1027 22:39:38.141676  745063 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:39:38.141710  745063 main.go:143] libmachine: Decoding PEM data...
	I1027 22:39:38.141736  745063 main.go:143] libmachine: Parsing certificate...
	I1027 22:39:38.141811  745063 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:39:38.141842  745063 main.go:143] libmachine: Decoding PEM data...
	I1027 22:39:38.141857  745063 main.go:143] libmachine: Parsing certificate...
	I1027 22:39:38.142245  745063 cli_runner.go:164] Run: docker network inspect auto-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:39:38.160771  745063 cli_runner.go:211] docker network inspect auto-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:39:38.160847  745063 network_create.go:284] running [docker network inspect auto-293335] to gather additional debugging logs...
	I1027 22:39:38.160869  745063 cli_runner.go:164] Run: docker network inspect auto-293335
	W1027 22:39:38.178198  745063 cli_runner.go:211] docker network inspect auto-293335 returned with exit code 1
	I1027 22:39:38.178229  745063 network_create.go:287] error running [docker network inspect auto-293335]: docker network inspect auto-293335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-293335 not found
	I1027 22:39:38.178243  745063 network_create.go:289] output of [docker network inspect auto-293335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-293335 not found
	
	** /stderr **
	I1027 22:39:38.178353  745063 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:38.199445  745063 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:39:38.200534  745063 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:39:38.201127  745063 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:39:38.202096  745063 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-882fc6de2a09 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:5e:7d:03:a2:c4} reservation:<nil>}
	I1027 22:39:38.202891  745063 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-19326983879b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:fd:92:c2:f9:aa} reservation:<nil>}
	I1027 22:39:38.204168  745063 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e68010}
	I1027 22:39:38.204196  745063 network_create.go:124] attempt to create docker network auto-293335 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 22:39:38.204250  745063 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-293335 auto-293335
	I1027 22:39:38.277836  745063 network_create.go:108] docker network auto-293335 192.168.94.0/24 created
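The skipped-subnet walk above shows how a free network is picked for a new profile: candidates start at 192.168.49.0/24 and advance in steps of 9 (.49, .58, .67, .76, .85), each rejected because an existing docker bridge already owns its gateway, until 192.168.94.0/24 comes up free and is passed to docker network create with gateway .1 (the node container later gets the static .2). A loose approximation in Go that treats locally bound interface prefixes as "taken"; minikube's real reservation logic in network.go is more involved.

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns
    // the first /24 whose network address is not already claimed by a local
    // interface (a bridge like br-d433cca18beb holds the .1 of a taken subnet).
    func firstFreeSubnet() (string, error) {
    	taken := map[string]bool{}
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return "", err
    	}
    	for _, a := range addrs {
    		if ipn, ok := a.(*net.IPNet); ok {
    			taken[ipn.IP.Mask(ipn.Mask).String()] = true
    		}
    	}
    	for third := 49; third < 255; third += 9 {
    		base := fmt.Sprintf("192.168.%d.0", third)
    		if !taken[base] {
    			return base + "/24", nil
    		}
    	}
    	return "", fmt.Errorf("no free subnet found")
    }

    func main() {
    	s, err := firstFreeSubnet()
    	fmt.Println(s, err)
    }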
	I1027 22:39:38.277870  745063 kic.go:121] calculated static IP "192.168.94.2" for the "auto-293335" container
	I1027 22:39:38.277985  745063 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:39:38.304523  745063 cli_runner.go:164] Run: docker volume create auto-293335 --label name.minikube.sigs.k8s.io=auto-293335 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:39:38.329794  745063 oci.go:103] Successfully created a docker volume auto-293335
	I1027 22:39:38.329903  745063 cli_runner.go:164] Run: docker run --rm --name auto-293335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-293335 --entrypoint /usr/bin/test -v auto-293335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:39:38.785518  745063 oci.go:107] Successfully prepared a docker volume auto-293335
	I1027 22:39:38.785571  745063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:38.785600  745063 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:39:38.785671  745063 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 22:39:38.853899  743829 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:39:38.854040  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:38.876125  743829 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:38.876435  743829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1027 22:39:38.876467  743829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-290425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-290425/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-290425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:39:39.038446  743829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:39:39.038475  743829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:39:39.038499  743829 ubuntu.go:190] setting up certificates
	I1027 22:39:39.038512  743829 provision.go:84] configureAuth start
	I1027 22:39:39.038584  743829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:39:39.062122  743829 provision.go:143] copyHostCerts
	I1027 22:39:39.062191  743829 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:39:39.062206  743829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:39:39.062274  743829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:39:39.062390  743829 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:39:39.062405  743829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:39:39.062447  743829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:39:39.062543  743829 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:39:39.062556  743829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:39:39.062596  743829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:39:39.062676  743829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-290425 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-290425]
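The server cert generated above carries SANs for every name a client might dial: 127.0.0.1, the container IP 192.168.76.2, localhost, minikube, and the profile name. A self-signed stand-in in Go to illustrate the SAN layout only; the real server.pem is signed by the minikube CA (ca.pem/ca-key.pem), not self-signed.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-290425"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
    		// The SAN set from the log line above:
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-290425"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for brevity: template doubles as its own parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }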
	I1027 22:39:39.443175  743829 provision.go:177] copyRemoteCerts
	I1027 22:39:39.443253  743829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:39:39.443303  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:39.463198  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:39.566832  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:39:39.593835  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:39:39.611167  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:39:39.629365  743829 provision.go:87] duration metric: took 590.834285ms to configureAuth
	I1027 22:39:39.629397  743829 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:39:39.629604  743829 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:39.629730  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:39.649412  743829 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:39.649701  743829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1027 22:39:39.649721  743829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:39:39.927025  743829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:39:39.927055  743829 machine.go:97] duration metric: took 1.447661976s to provisionDockerMachine
	I1027 22:39:39.927068  743829 client.go:176] duration metric: took 5.993527223s to LocalClient.Create
	I1027 22:39:39.927092  743829 start.go:167] duration metric: took 5.99359595s to libmachine.API.Create "newest-cni-290425"
	I1027 22:39:39.927104  743829 start.go:293] postStartSetup for "newest-cni-290425" (driver="docker")
	I1027 22:39:39.927116  743829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:39:39.927182  743829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:39:39.927232  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:39.949573  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:40.054515  743829 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:39:40.058505  743829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:39:40.058541  743829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:39:40.058554  743829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:39:40.058614  743829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:39:40.058714  743829 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:39:40.058835  743829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:39:40.067864  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:40.090763  743829 start.go:296] duration metric: took 163.640069ms for postStartSetup
	I1027 22:39:40.091158  743829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:39:40.112212  743829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:39:40.112569  743829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:39:40.112633  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:40.134615  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:40.237968  743829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:39:40.245464  743829 start.go:128] duration metric: took 6.313703068s to createHost
	I1027 22:39:40.245515  743829 start.go:83] releasing machines lock for "newest-cni-290425", held for 6.31386504s
	I1027 22:39:40.245615  743829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:39:40.269751  743829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:39:40.269792  743829 ssh_runner.go:195] Run: cat /version.json
	I1027 22:39:40.269838  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:40.270209  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:40.305189  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:40.306212  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:40.490711  743829 ssh_runner.go:195] Run: systemctl --version
	I1027 22:39:40.500153  743829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:39:40.556539  743829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:39:40.565175  743829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:39:40.565252  743829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:39:40.633796  743829 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
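The find above parks any stock bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so the kindnet config recommended later is the only one CRI-O loads. The same command with shell quoting made explicit (the log shows it post-parse, with quotes stripped):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;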
	I1027 22:39:40.633822  743829 start.go:496] detecting cgroup driver to use...
	I1027 22:39:40.633862  743829 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:39:40.633922  743829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:39:40.662253  743829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:39:40.680485  743829 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:39:40.680555  743829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:39:40.705934  743829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:39:40.733851  743829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:39:40.880360  743829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:39:41.016251  743829 docker.go:234] disabling docker service ...
	I1027 22:39:41.016318  743829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:39:41.045061  743829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:39:41.062254  743829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:39:41.186133  743829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:39:41.287335  743829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:39:41.301710  743829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:39:41.320170  743829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:39:41.320246  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.333068  743829 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:39:41.333140  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.342773  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.352724  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.366213  743829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:39:41.378439  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.391815  743829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.411149  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.424575  743829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:39:41.433671  743829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
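The sed edits above pin four runtime settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the systemd cgroup manager, conmon's cgroup, and unprivileged low ports via default_sysctls. A quick on-node check that they took effect (sketch):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf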
	I1027 22:39:41.444172  743829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:41.566487  743829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:39:40.736422  741885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:39:40.737668  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 22:39:40.737690  741885 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 22:39:40.739330  741885 node_ready.go:35] waiting up to 6m0s for node "embed-certs-829976" to be "Ready" ...
	I1027 22:39:40.758148  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 22:39:40.758241  741885 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 22:39:40.784076  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 22:39:40.784116  741885 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 22:39:40.801369  741885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:39:40.816291  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 22:39:40.816318  741885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 22:39:40.835377  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 22:39:40.835407  741885 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 22:39:40.863565  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 22:39:40.863596  741885 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 22:39:40.883782  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 22:39:40.883817  741885 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 22:39:40.904058  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 22:39:40.904086  741885 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 22:39:40.931506  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:39:40.931534  741885 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 22:39:40.953268  741885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:39:42.383129  741885 node_ready.go:49] node "embed-certs-829976" is "Ready"
	I1027 22:39:42.383171  741885 node_ready.go:38] duration metric: took 1.643810214s for node "embed-certs-829976" to be "Ready" ...
	I1027 22:39:42.383198  741885 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:39:42.383259  741885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:39:43.393739  741885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.657274347s)
	I1027 22:39:43.393766  741885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.592355735s)
	I1027 22:39:43.884265  741885 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.50098346s)
	I1027 22:39:43.884284  741885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.93092816s)
	I1027 22:39:43.884305  741885 api_server.go:72] duration metric: took 3.360898246s to wait for apiserver process to appear ...
	I1027 22:39:43.884314  741885 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:39:43.884337  741885 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 22:39:43.885657  741885 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-829976 addons enable metrics-server
	
	I1027 22:39:43.887429  741885 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1027 22:39:43.868160  743829 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.301623825s)
	I1027 22:39:43.868205  743829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:39:43.868258  743829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:39:43.874444  743829 start.go:564] Will wait 60s for crictl version
	I1027 22:39:43.874521  743829 ssh_runner.go:195] Run: which crictl
	I1027 22:39:43.879487  743829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:39:43.920116  743829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:39:43.920240  743829 ssh_runner.go:195] Run: crio --version
	I1027 22:39:43.958916  743829 ssh_runner.go:195] Run: crio --version
	I1027 22:39:43.996688  743829 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:39:43.997891  743829 cli_runner.go:164] Run: docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:44.020082  743829 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:39:44.024301  743829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
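The one-liner above is an idempotent /etc/hosts update: filter out any stale line for the name, append the fresh mapping, and copy the temp file back into place. The same pattern, spelled out:

	entry_ip=192.168.76.1; entry_name=host.minikube.internal
	{ grep -v $'\t'"${entry_name}"'$' /etc/hosts        # drop any existing entry for the name
	  printf '%s\t%s\n' "$entry_ip" "$entry_name"       # append the fresh mapping
	} > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts                      # cp (not mv) keeps the file's ownership and mode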
	I1027 22:39:44.102757  743829 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1027 22:39:43.888680  741885 addons.go:514] duration metric: took 3.365238396s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 22:39:43.890605  741885 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:39:43.890627  741885 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
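The 500 above is the usual transient state just after apiserver start: every check passes except the rbac/bootstrap-roles post-start hook, and the retry roughly half a second later (below) returns 200. The same probe can be run by hand against any cluster, with kubectl supplying the client certs:

	kubectl get --raw '/healthz?verbose'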
	I1027 22:39:44.385109  741885 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 22:39:44.391373  741885 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 22:39:44.392829  741885 api_server.go:141] control plane version: v1.34.1
	I1027 22:39:44.393268  741885 api_server.go:131] duration metric: took 508.94097ms to wait for apiserver health ...
	I1027 22:39:44.393301  741885 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:39:44.398759  741885 system_pods.go:59] 8 kube-system pods found
	I1027 22:39:44.398916  741885 system_pods.go:61] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:39:44.398934  741885 system_pods.go:61] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:39:44.398954  741885 system_pods.go:61] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:39:44.398978  741885 system_pods.go:61] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:39:44.398996  741885 system_pods.go:61] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:39:44.399003  741885 system_pods.go:61] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:39:44.399016  741885 system_pods.go:61] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:39:44.399023  741885 system_pods.go:61] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:39:44.399033  741885 system_pods.go:74] duration metric: took 5.699715ms to wait for pod list to return data ...
	I1027 22:39:44.399052  741885 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:39:44.402736  741885 default_sa.go:45] found service account: "default"
	I1027 22:39:44.402819  741885 default_sa.go:55] duration metric: took 3.757403ms for default service account to be created ...
	I1027 22:39:44.402835  741885 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:39:44.405674  741885 system_pods.go:86] 8 kube-system pods found
	I1027 22:39:44.405750  741885 system_pods.go:89] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:39:44.405766  741885 system_pods.go:89] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:39:44.405775  741885 system_pods.go:89] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:39:44.405787  741885 system_pods.go:89] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:39:44.405818  741885 system_pods.go:89] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:39:44.405927  741885 system_pods.go:89] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:39:44.405980  741885 system_pods.go:89] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:39:44.405998  741885 system_pods.go:89] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:39:44.406007  741885 system_pods.go:126] duration metric: took 3.165876ms to wait for k8s-apps to be running ...
	I1027 22:39:44.406016  741885 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:39:44.406075  741885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:39:44.423551  741885 system_svc.go:56] duration metric: took 17.525198ms WaitForService to wait for kubelet
	I1027 22:39:44.423621  741885 kubeadm.go:587] duration metric: took 3.900213782s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:39:44.423649  741885 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:39:44.426477  741885 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:39:44.426508  741885 node_conditions.go:123] node cpu capacity is 8
	I1027 22:39:44.426525  741885 node_conditions.go:105] duration metric: took 2.868803ms to run NodePressure ...
	I1027 22:39:44.426540  741885 start.go:242] waiting for startup goroutines ...
	I1027 22:39:44.426550  741885 start.go:247] waiting for cluster config update ...
	I1027 22:39:44.426571  741885 start.go:256] writing updated cluster config ...
	I1027 22:39:44.426887  741885 ssh_runner.go:195] Run: rm -f paused
	I1027 22:39:44.432668  741885 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:39:44.441454  741885 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-msbj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:44.104817  743829 kubeadm.go:884] updating cluster {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:39:44.105013  743829 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:44.105099  743829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:44.156373  743829 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:44.156403  743829 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:39:44.156472  743829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:44.185867  743829 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:44.185891  743829 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:39:44.185899  743829 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 22:39:44.186028  743829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-290425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
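In the kubelet unit above, the empty ExecStart= line is the standard systemd idiom for clearing the packaged command before substituting a new one; without it, systemd would reject a second ExecStart for a non-oneshot service. Once the drop-in is written (the scp to 10-kubeadm.conf appears below), it takes effect with:

	sudo systemctl daemon-reload && sudo systemctl start kubelet
	systemctl cat kubelet          # base unit plus the 10-kubeadm.conf override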
	I1027 22:39:44.186125  743829 ssh_runner.go:195] Run: crio config
	I1027 22:39:44.234853  743829 cni.go:84] Creating CNI manager for ""
	I1027 22:39:44.234876  743829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:44.234904  743829 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 22:39:44.234934  743829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-290425 NodeName:newest-cni-290425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:39:44.235129  743829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-290425"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:39:44.235214  743829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:39:44.243674  743829 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:39:44.243768  743829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:39:44.251841  743829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:39:44.268218  743829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:39:44.288231  743829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
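The rendered kubeadm config shown earlier is staged as kubeadm.yaml.new before being copied over kubeadm.yaml further down. If a config like this ever needs checking outside a test run, kubeadm can vet it without touching cluster state (sketch):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml   # recent kubeadm releases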
	I1027 22:39:44.305390  743829 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:39:44.310176  743829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:39:44.322997  743829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:44.436799  743829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:44.466259  743829 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425 for IP: 192.168.76.2
	I1027 22:39:44.466327  743829 certs.go:195] generating shared ca certs ...
	I1027 22:39:44.466374  743829 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.466566  743829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:39:44.466635  743829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:39:44.466646  743829 certs.go:257] generating profile certs ...
	I1027 22:39:44.466714  743829 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key
	I1027 22:39:44.466727  743829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.crt with IP's: []
	I1027 22:39:44.658433  743829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.crt ...
	I1027 22:39:44.658466  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.crt: {Name:mk52bb5b0c9e51e109632c9ea2227777d91b7aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.658625  743829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key ...
	I1027 22:39:44.658645  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key: {Name:mkf30bcc1c690649895c4ff985af3da1c7fa30b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.658784  743829 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67
	I1027 22:39:44.658807  743829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt.46af5a67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 22:39:44.880190  743829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt.46af5a67 ...
	I1027 22:39:44.880222  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt.46af5a67: {Name:mk685cc4ab6a3b8ac44496e69c7626f728be2214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.880392  743829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67 ...
	I1027 22:39:44.880410  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67: {Name:mkc70edbe60feebf38e5d81382cc70bec4258b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.880535  743829 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt.46af5a67 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt
	I1027 22:39:44.880651  743829 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key
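The apiserver cert assembled above must cover the in-cluster service IP (10.96.0.1), loopback, and the node IP, matching the IP list passed to the generator. The SANs on the resulting file can be confirmed with openssl (path taken from the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'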
	I1027 22:39:44.880741  743829 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key
	I1027 22:39:44.880766  743829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt with IP's: []
	I1027 22:39:45.629017  743829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt ...
	I1027 22:39:45.629044  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt: {Name:mk31788c60402132a2fbf20f2a07e83085ee1b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:45.629222  743829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key ...
	I1027 22:39:45.629237  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key: {Name:mk627c14beb41c896c195dd19330f61236072a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:45.629422  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:39:45.629460  743829 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:39:45.629470  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:39:45.629494  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:39:45.629516  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:39:45.629536  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:39:45.629573  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:45.630159  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:39:45.648938  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:39:45.667176  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:39:45.685148  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:39:45.709657  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:39:45.735490  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:39:45.754144  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:39:45.772826  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:39:45.790377  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:39:45.811457  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:39:45.829179  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:39:45.848049  743829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:39:45.861910  743829 ssh_runner.go:195] Run: openssl version
	I1027 22:39:45.869071  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:39:45.877902  743829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:39:45.881602  743829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:39:45.881653  743829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:39:45.921990  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:39:45.932750  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:39:45.944674  743829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:45.949325  743829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:45.949373  743829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:45.988788  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:39:45.999585  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:39:46.009637  743829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:39:46.013527  743829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:39:46.013601  743829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:39:46.050768  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
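The test -L || ln -fs steps above build OpenSSL's hashed lookup names: every CA under /etc/ssl/certs is reachable as <subject-hash>.0, and the hash is exactly what the preceding openssl x509 -hash call prints. Deriving one of the link names by hand (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this log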
	I1027 22:39:46.059139  743829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:39:46.062819  743829 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:39:46.062879  743829 kubeadm.go:401] StartCluster: {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:46.062982  743829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:39:46.063030  743829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:39:46.091173  743829 cri.go:89] found id: ""
	I1027 22:39:46.091284  743829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:39:46.099380  743829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:39:46.107380  743829 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:39:46.107438  743829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:39:46.115813  743829 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:39:46.115827  743829 kubeadm.go:158] found existing configuration files:
	
	I1027 22:39:46.115867  743829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:39:46.123572  743829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:39:46.123626  743829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:39:46.131366  743829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:39:46.138690  743829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:39:46.138742  743829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:39:46.145973  743829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:39:46.153392  743829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:39:46.153439  743829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:39:46.160171  743829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:39:46.167208  743829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:39:46.167251  743829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
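The four grep-then-rm exchanges above implement one rule: a kubeconfig under /etc/kubernetes survives only if it already points at control-plane.minikube.internal:8443; anything else (including, as here, nothing at all) is removed before kubeadm init. Collapsed into a loop (equivalent sketch):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done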
	I1027 22:39:46.174088  743829 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:39:46.213498  743829 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:39:46.213552  743829 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:39:46.235328  743829 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:39:46.235414  743829 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:39:46.235463  743829 kubeadm.go:319] OS: Linux
	I1027 22:39:46.235535  743829 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:39:46.235657  743829 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:39:46.235742  743829 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:39:46.235823  743829 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:39:46.235897  743829 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:39:46.236016  743829 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:39:46.236085  743829 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:39:46.236150  743829 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:39:46.303608  743829 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:39:46.303703  743829 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:39:46.303781  743829 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:39:46.314237  743829 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 27 22:39:33 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:33.73463371Z" level=info msg="Starting container: 7a8ed4135a7670a7e40dfce2faf03eaf63710b871ee015aa91306541ec134835" id=2404f171-ee69-456f-9988-415020cb2116 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:39:33 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:33.736724612Z" level=info msg="Started container" PID=1876 containerID=7a8ed4135a7670a7e40dfce2faf03eaf63710b871ee015aa91306541ec134835 description=kube-system/coredns-66bc5c9577-bvr8f/coredns id=2404f171-ee69-456f-9988-415020cb2116 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98cecf15b9849247b371691bdec3468c5498618d137a1b978c3113d5aadcfe72
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.740888277Z" level=info msg="Running pod sandbox: default/busybox/POD" id=22731e84-369e-4a9a-815f-726701f1db0b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.741031612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.803456628Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a6286d33f366c9cee331a51a46a98169cd6df9a67ac82fa63de02ce15265d2eb UID:cbed7aab-1041-41f4-a104-e6676919cc97 NetNS:/var/run/netns/438f757f-0ed6-44b5-b2bd-e3108afe833b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00082c350}] Aliases:map[]}"
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.803502692Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.813982105Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a6286d33f366c9cee331a51a46a98169cd6df9a67ac82fa63de02ce15265d2eb UID:cbed7aab-1041-41f4-a104-e6676919cc97 NetNS:/var/run/netns/438f757f-0ed6-44b5-b2bd-e3108afe833b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00082c350}] Aliases:map[]}"
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.814219111Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.815011608Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.816088188Z" level=info msg="Ran pod sandbox a6286d33f366c9cee331a51a46a98169cd6df9a67ac82fa63de02ce15265d2eb with infra container: default/busybox/POD" id=22731e84-369e-4a9a-815f-726701f1db0b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.81744316Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0ed6b25-1013-49d8-bd8b-52b40c50847e name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.817593512Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f0ed6b25-1013-49d8-bd8b-52b40c50847e name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.817644953Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f0ed6b25-1013-49d8-bd8b-52b40c50847e name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.818628073Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=796b8f3c-bbe9-4ede-9cc5-c15621bb8eec name=/runtime.v1.ImageService/PullImage
	Oct 27 22:39:36 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:36.822125859Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.019028882Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=796b8f3c-bbe9-4ede-9cc5-c15621bb8eec name=/runtime.v1.ImageService/PullImage
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.019986317Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=17b5eaf0-b035-428b-ad6e-7e6b9de6568a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.021487861Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aadf555e-6a69-4ea9-987f-c56548327bcc name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.024819416Z" level=info msg="Creating container: default/busybox/busybox" id=e196116c-12fc-4e89-b11a-97da0868fef3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.024981936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.029360025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.029920817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.075292962Z" level=info msg="Created container f8aec9bc49fbc1bb4c3781c6a6462ccb852e52097b5c8b7be52b3ffc42ac5b99: default/busybox/busybox" id=e196116c-12fc-4e89-b11a-97da0868fef3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.075915954Z" level=info msg="Starting container: f8aec9bc49fbc1bb4c3781c6a6462ccb852e52097b5c8b7be52b3ffc42ac5b99" id=efd4b067-757e-44b3-9244-2f472946e82d name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:39:39 default-k8s-diff-port-927034 crio[776]: time="2025-10-27T22:39:39.078295488Z" level=info msg="Started container" PID=1951 containerID=f8aec9bc49fbc1bb4c3781c6a6462ccb852e52097b5c8b7be52b3ffc42ac5b99 description=default/busybox/busybox id=efd4b067-757e-44b3-9244-2f472946e82d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6286d33f366c9cee331a51a46a98169cd6df9a67ac82fa63de02ce15265d2eb
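
The lines above trace the standard CRI flow for default/busybox: ImageStatus finds no local image, PullImage fetches it by digest, then CreateContainer and StartContainer bring the container up. A hedged sketch of the first two calls against CRI-O's socket, using the k8s.io/cri-api gRPC bindings; the socket path and image name are taken from this log, and error handling is kept minimal.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket on this node.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ic := runtimeapi.NewImageServiceClient(conn)
	img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

	// A nil Image in the response is the "not found" case logged above.
	st, err := ic.ImageStatus(context.Background(),
		&runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil {
		// Mirrors the PullImage call CRI-O logs next.
		if _, err := ic.PullImage(context.Background(),
			&runtimeapi.PullImageRequest{Image: img}); err != nil {
			log.Fatal(err)
		}
		log.Println("image pulled")
	}
}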
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f8aec9bc49fbc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   a6286d33f366c       busybox                                                default
	7a8ed4135a767       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   98cecf15b9849       coredns-66bc5c9577-bvr8f                               kube-system
	874485b15c9c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   fe64eae61ee11       storage-provisioner                                    kube-system
	4b989bb30d0f5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   bebfbacbc83a8       kindnet-94cw9                                          kube-system
	49e368b65ee72       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   45586a40ba707       kube-proxy-42dj4                                       kube-system
	4720f6a515813       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   73a4fd62807ff       etcd-default-k8s-diff-port-927034                      kube-system
	1210082bafb94       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   3e4b3d95793c9       kube-scheduler-default-k8s-diff-port-927034            kube-system
	57c7f24912739       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   d320cb13c1a27       kube-controller-manager-default-k8s-diff-port-927034   kube-system
	629c2a223d81a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   177a7a2eba915       kube-apiserver-default-k8s-diff-port-927034            kube-system
	
	
	==> coredns [7a8ed4135a7670a7e40dfce2faf03eaf63710b871ee015aa91306541ec134835] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56222 - 61336 "HINFO IN 6372316105996071452.3665652708161299156. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026092441s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-927034
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-927034
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=default-k8s-diff-port-927034
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_39_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:39:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-927034
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:39:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:39:46 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:39:46 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:39:46 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:39:46 +0000   Mon, 27 Oct 2025 22:39:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-927034
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                bea60602-4e46-4583-a378-a857a2ae88ea
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-bvr8f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-927034                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-94cw9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-927034             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-927034    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-42dj4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-927034             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-927034 event: Registered Node default-k8s-diff-port-927034 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-927034 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [4720f6a5158130b91ab3a663e33b97663b3861d3f19db0fcb492d49d1f69bc1e] <==
	{"level":"warn","ts":"2025-10-27T22:39:12.837594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.845732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.851988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.859896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.866420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.873370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.881601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.889665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.897196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.906059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.913306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.920652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.928793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.936093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.943487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.951303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.959452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.967174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.976275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:12.983014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:13.004306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:13.008876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:13.016258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:13.022909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:13.077772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33564","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:39:47 up  2:22,  0 user,  load average: 4.27, 2.91, 2.81
	Linux default-k8s-diff-port-927034 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4b989bb30d0f5db7fc09577326c29040fddd6f56ebe18e8b2cac8220795b1637] <==
	I1027 22:39:22.479799       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:39:22.480087       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1027 22:39:22.480241       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:39:22.480261       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:39:22.480296       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:39:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:39:22.772467       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:39:22.772851       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:39:22.772870       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:39:22.773196       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:39:23.173112       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:39:23.173150       1 metrics.go:72] Registering metrics
	I1027 22:39:23.173220       1 controller.go:711] "Syncing nftables rules"
	I1027 22:39:32.778045       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:39:32.778112       1 main.go:301] handling current node
	I1027 22:39:42.773035       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:39:42.773076       1 main.go:301] handling current node
	
	
	==> kube-apiserver [629c2a223d81ac7dbce4eb7b1f21dab2203bd4a57bb992abd0086bf0c69d8d83] <==
	I1027 22:39:13.737187       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 22:39:13.737190       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:39:13.741026       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:39:13.741144       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1027 22:39:13.769293       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1027 22:39:13.812861       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:39:13.913790       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:39:14.616544       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 22:39:14.620330       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 22:39:14.620346       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:39:15.071018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:39:15.108120       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:39:15.220842       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 22:39:15.226556       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1027 22:39:15.227736       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:39:15.231820       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:39:15.799545       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:39:16.125986       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:39:16.136917       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 22:39:16.145899       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:39:21.152657       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:39:21.803138       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:39:21.807595       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:39:21.901642       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1027 22:39:46.400261       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:43826: use of closed network connection
	
	
	==> kube-controller-manager [57c7f249127392643d0d0e0d3997904c7f787249dd556e07683ebea4d39c5052] <==
	I1027 22:39:20.847262       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 22:39:20.847295       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:39:20.847316       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 22:39:20.847386       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 22:39:20.847404       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:39:20.847495       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-927034"
	I1027 22:39:20.847550       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 22:39:20.847835       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:39:20.848029       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:39:20.848030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 22:39:20.848217       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:39:20.848222       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:39:20.848474       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:39:20.848571       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:39:20.848608       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 22:39:20.848697       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 22:39:20.849826       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 22:39:20.850398       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:39:20.850528       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:39:20.850595       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:39:20.850644       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:39:20.850650       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:39:20.850657       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:39:20.857579       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-927034" podCIDRs=["10.244.0.0/24"]
	I1027 22:39:35.849272       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [49e368b65ee72bfb02cd71cb3cc32605da9a3375b27d82ca18c9a4f6e9b4eb95] <==
	I1027 22:39:22.337265       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:39:22.403440       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:39:22.503596       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:39:22.503634       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1027 22:39:22.503752       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:39:22.524168       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:39:22.524241       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:39:22.529381       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:39:22.529840       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:39:22.529880       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:39:22.531606       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:39:22.531683       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:39:22.531662       1 config.go:200] "Starting service config controller"
	I1027 22:39:22.531709       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:39:22.531726       1 config.go:309] "Starting node config controller"
	I1027 22:39:22.531730       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:39:22.531734       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:39:22.531708       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:39:22.632114       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:39:22.632129       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:39:22.632150       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:39:22.632165       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1210082bafb9491b77045f58d89fa32351fd419b4ea7b4cd8c43de2f28af028e] <==
	E1027 22:39:13.681283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:39:13.681378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:39:13.681445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:39:13.681551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:39:13.681560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 22:39:13.681640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:39:13.681719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:39:13.681792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:39:13.681902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:39:13.681982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:39:13.682116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:39:13.682147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:39:13.682487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:39:13.682500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:39:13.682637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:39:14.512898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:39:14.543123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:39:14.552228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:39:14.696343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:39:14.720440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:39:14.775738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:39:14.790104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:39:14.824179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:39:14.876344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1027 22:39:17.576764       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:39:17 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:17.034540    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-927034" podStartSLOduration=1.03451963 podStartE2EDuration="1.03451963s" podCreationTimestamp="2025-10-27 22:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:17.03434149 +0000 UTC m=+1.133836199" watchObservedRunningTime="2025-10-27 22:39:17.03451963 +0000 UTC m=+1.134014308"
	Oct 27 22:39:17 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:17.054467    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-927034" podStartSLOduration=1.054446245 podStartE2EDuration="1.054446245s" podCreationTimestamp="2025-10-27 22:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:17.044862099 +0000 UTC m=+1.144356792" watchObservedRunningTime="2025-10-27 22:39:17.054446245 +0000 UTC m=+1.153940926"
	Oct 27 22:39:17 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:17.054657    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-927034" podStartSLOduration=1.054648943 podStartE2EDuration="1.054648943s" podCreationTimestamp="2025-10-27 22:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:17.054626438 +0000 UTC m=+1.154121116" watchObservedRunningTime="2025-10-27 22:39:17.054648943 +0000 UTC m=+1.154143621"
	Oct 27 22:39:17 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:17.063054    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-927034" podStartSLOduration=1.063031128 podStartE2EDuration="1.063031128s" podCreationTimestamp="2025-10-27 22:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:17.062880391 +0000 UTC m=+1.162375072" watchObservedRunningTime="2025-10-27 22:39:17.063031128 +0000 UTC m=+1.162525809"
	Oct 27 22:39:20 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:20.934217    1333 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 22:39:20 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:20.934974    1333 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 22:39:22 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:22.017592    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d404d021-c91d-4f94-809d-3db640009943-kube-proxy\") pod \"kube-proxy-42dj4\" (UID: \"d404d021-c91d-4f94-809d-3db640009943\") " pod="kube-system/kube-proxy-42dj4"
	Oct 27 22:39:22 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:22.017641    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqcst\" (UniqueName: \"kubernetes.io/projected/d404d021-c91d-4f94-809d-3db640009943-kube-api-access-lqcst\") pod \"kube-proxy-42dj4\" (UID: \"d404d021-c91d-4f94-809d-3db640009943\") " pod="kube-system/kube-proxy-42dj4"
	Oct 27 22:39:22 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:22.017670    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65dd1953-fb73-4e45-82f7-f96c21f0be5e-lib-modules\") pod \"kindnet-94cw9\" (UID: \"65dd1953-fb73-4e45-82f7-f96c21f0be5e\") " pod="kube-system/kindnet-94cw9"
	Oct 27 22:39:22 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:22.017780    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d404d021-c91d-4f94-809d-3db640009943-xtables-lock\") pod \"kube-proxy-42dj4\" (UID: \"d404d021-c91d-4f94-809d-3db640009943\") " pod="kube-system/kube-proxy-42dj4"
	Oct 27 22:39:22 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:22.017870    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d404d021-c91d-4f94-809d-3db640009943-lib-modules\") pod \"kube-proxy-42dj4\" (UID: \"d404d021-c91d-4f94-809d-3db640009943\") " pod="kube-system/kube-proxy-42dj4"
	Oct 27 22:39:22 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:22.017910    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65dd1953-fb73-4e45-82f7-f96c21f0be5e-xtables-lock\") pod \"kindnet-94cw9\" (UID: \"65dd1953-fb73-4e45-82f7-f96c21f0be5e\") " pod="kube-system/kindnet-94cw9"
	Oct 27 22:39:22 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:22.017973    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/65dd1953-fb73-4e45-82f7-f96c21f0be5e-cni-cfg\") pod \"kindnet-94cw9\" (UID: \"65dd1953-fb73-4e45-82f7-f96c21f0be5e\") " pod="kube-system/kindnet-94cw9"
	Oct 27 22:39:22 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:22.017999    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6xn\" (UniqueName: \"kubernetes.io/projected/65dd1953-fb73-4e45-82f7-f96c21f0be5e-kube-api-access-4w6xn\") pod \"kindnet-94cw9\" (UID: \"65dd1953-fb73-4e45-82f7-f96c21f0be5e\") " pod="kube-system/kindnet-94cw9"
	Oct 27 22:39:23 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:23.030672    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-94cw9" podStartSLOduration=2.030654347 podStartE2EDuration="2.030654347s" podCreationTimestamp="2025-10-27 22:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:23.030383261 +0000 UTC m=+7.129877954" watchObservedRunningTime="2025-10-27 22:39:23.030654347 +0000 UTC m=+7.130149028"
	Oct 27 22:39:23 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:23.039995    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-42dj4" podStartSLOduration=2.039977055 podStartE2EDuration="2.039977055s" podCreationTimestamp="2025-10-27 22:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:23.039876468 +0000 UTC m=+7.139371238" watchObservedRunningTime="2025-10-27 22:39:23.039977055 +0000 UTC m=+7.139471735"
	Oct 27 22:39:33 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:33.324226    1333 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 22:39:33 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:33.397738    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e6d771f8-0e4b-45e7-b109-781d0461cc95-tmp\") pod \"storage-provisioner\" (UID: \"e6d771f8-0e4b-45e7-b109-781d0461cc95\") " pod="kube-system/storage-provisioner"
	Oct 27 22:39:33 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:33.397794    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bef45f4e-52c9-4cc3-ba56-2d11255107fe-config-volume\") pod \"coredns-66bc5c9577-bvr8f\" (UID: \"bef45f4e-52c9-4cc3-ba56-2d11255107fe\") " pod="kube-system/coredns-66bc5c9577-bvr8f"
	Oct 27 22:39:33 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:33.397827    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rbh\" (UniqueName: \"kubernetes.io/projected/e6d771f8-0e4b-45e7-b109-781d0461cc95-kube-api-access-99rbh\") pod \"storage-provisioner\" (UID: \"e6d771f8-0e4b-45e7-b109-781d0461cc95\") " pod="kube-system/storage-provisioner"
	Oct 27 22:39:33 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:33.397871    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwqfs\" (UniqueName: \"kubernetes.io/projected/bef45f4e-52c9-4cc3-ba56-2d11255107fe-kube-api-access-gwqfs\") pod \"coredns-66bc5c9577-bvr8f\" (UID: \"bef45f4e-52c9-4cc3-ba56-2d11255107fe\") " pod="kube-system/coredns-66bc5c9577-bvr8f"
	Oct 27 22:39:34 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:34.061326    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.061303917 podStartE2EDuration="12.061303917s" podCreationTimestamp="2025-10-27 22:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:34.061285828 +0000 UTC m=+18.160780509" watchObservedRunningTime="2025-10-27 22:39:34.061303917 +0000 UTC m=+18.160798598"
	Oct 27 22:39:34 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:34.071140    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bvr8f" podStartSLOduration=12.071118396 podStartE2EDuration="12.071118396s" podCreationTimestamp="2025-10-27 22:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:34.070991507 +0000 UTC m=+18.170486211" watchObservedRunningTime="2025-10-27 22:39:34.071118396 +0000 UTC m=+18.170613076"
	Oct 27 22:39:36 default-k8s-diff-port-927034 kubelet[1333]: I1027 22:39:36.416037    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nklng\" (UniqueName: \"kubernetes.io/projected/cbed7aab-1041-41f4-a104-e6676919cc97-kube-api-access-nklng\") pod \"busybox\" (UID: \"cbed7aab-1041-41f4-a104-e6676919cc97\") " pod="default/busybox"
	Oct 27 22:39:46 default-k8s-diff-port-927034 kubelet[1333]: E1027 22:39:46.399786    1333 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46290->127.0.0.1:33575: write tcp 127.0.0.1:46290->127.0.0.1:33575: write: broken pipe
	
	
	==> storage-provisioner [874485b15c9c3c1bb96c802fd8ef892d286d359bd666b27759328992709c0955] <==
	I1027 22:39:33.746167       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 22:39:33.754406       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 22:39:33.754480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 22:39:33.756785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:33.761703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 22:39:33.761843       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 22:39:33.762019       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-927034_cfe379ad-2243-495b-bef6-bca0dccbeb5b!
	I1027 22:39:33.761969       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"42eefec6-3cd5-4f73-a150-25ae59ac9f08", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-927034_cfe379ad-2243-495b-bef6-bca0dccbeb5b became leader
	W1027 22:39:33.763824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:33.768443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 22:39:33.862878       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-927034_cfe379ad-2243-495b-bef6-bca0dccbeb5b!
	W1027 22:39:35.772155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:35.839776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:37.843225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:37.851334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:39.855316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:39.862366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:41.866614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:41.897033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:43.901401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:43.906447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:45.909808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:45.913901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:47.917311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:39:47.921191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
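
The repeated warnings above come from the provisioner's leader-election traffic, which still reads and writes v1 Endpoints even though that API is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the suggested replacement, listing EndpointSlices in kube-system; the kubeconfig location is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default ~/.kube/config; in-cluster config would also work.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// EndpointSlices are the non-deprecated replacement for v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, s.AddressType)
	}
}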
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-927034 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.41s)
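Note: the repeated "v1 Endpoints is deprecated" warnings in the provisioner log above are advisory rather than the failure cause; client-go prints them whenever the leader-election code still reads the legacy v1 Endpoints lock object against a v1.33+ API server. As a hedged sketch (assuming the cluster context for this profile is still reachable), the legacy object and its replacement can be compared directly:

	# Legacy lock object the provisioner's leader election still watches
	kubectl --context default-k8s-diff-port-927034 -n kube-system get endpoints k8s.io-minikube-hostpath
	# Replacement resource the deprecation warning points to
	kubectl --context default-k8s-diff-port-927034 -n kube-system get endpointslices.discovery.k8s.io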

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-290425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-290425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.644524ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
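Note: the MK_ADDON_ENABLE_PAUSED exit comes from minikube's pre-flight "is anything paused?" check, which shells into the node and runs runc directly; on this just-created crio node the runc state root /run/runc does not exist, so the listing fails before any addon work starts. A minimal way to reproduce the check by hand (a sketch; assumes the node container name from the docker inspect below and runc's default state root):

	# Open a shell in the kicbase container backing this profile
	docker exec -it newest-cni-290425 bash
	# Inside the node: runc reads container state from this directory
	ls -ld /run/runc
	# The exact command minikube's check runs; with no state root it
	# exits 1 with "open /run/runc: no such file or directory"
	sudo runc list -f json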
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-290425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-290425
helpers_test.go:243: (dbg) docker inspect newest-cni-290425:

-- stdout --
	[
	    {
	        "Id": "56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c",
	        "Created": "2025-10-27T22:39:37.68348506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 744981,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:39:37.719695399Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/hostname",
	        "HostsPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/hosts",
	        "LogPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c-json.log",
	        "Name": "/newest-cni-290425",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-290425:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-290425",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c",
	                "LowerDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-290425",
	                "Source": "/var/lib/docker/volumes/newest-cni-290425/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-290425",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-290425",
	                "name.minikube.sigs.k8s.io": "newest-cni-290425",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebdfcb5b1dc6b9b4b6b9aa0f8fdb720fc2ee7e848b4bf8e7b29c41106c1e10ea",
	            "SandboxKey": "/var/run/docker/netns/ebdfcb5b1dc6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-290425": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:de:c6:e1:2c:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "882fc6de2a096110b95ca3e32de921ddc1344df620994b742636f3034ae19fad",
	                    "EndpointID": "4da39fce8c52d018494c6ce176ef0bb885158851f2937ebe433147a8b098c414",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-290425",
	                        "56a3b8496171"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
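Note: most of the inspect dump above matters only for the dynamic port map. The SSH host port can be pulled out with the same Go template that appears in the provisioning log further down, which is handy for ad-hoc debugging:

	# Host port mapped to the node's 22/tcp (prints 33088 here,
	# matching the Ports section in the inspect output above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-290425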
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-290425 -n newest-cni-290425
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-290425 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ addons  │ enable dashboard -p no-preload-188814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ image   │ old-k8s-version-908589 image list --format=json                                                                                                                                                                                               │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ pause   │ -p old-k8s-version-908589 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │                     │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p old-k8s-version-908589                                                                                                                                                                                                                     │ old-k8s-version-908589       │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ delete  │ -p disable-driver-mounts-617659                                                                                                                                                                                                               │ disable-driver-mounts-617659 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:38 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:38 UTC │ 27 Oct 25 22:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-829976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ stop    │ -p embed-certs-829976 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ image   │ no-preload-188814 image list --format=json                                                                                                                                                                                                    │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ pause   │ -p no-preload-188814 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-829976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-695499                                                                                                                                                                                                                  │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p auto-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-927034 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-290425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:39:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:39:37.907657  745063 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:39:37.908199  745063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:37.908215  745063 out.go:374] Setting ErrFile to fd 2...
	I1027 22:39:37.908222  745063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:39:37.908680  745063 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:39:37.909824  745063 out.go:368] Setting JSON to false
	I1027 22:39:37.911273  745063 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8517,"bootTime":1761596261,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:39:37.911366  745063 start.go:143] virtualization: kvm guest
	I1027 22:39:37.912926  745063 out.go:179] * [auto-293335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:39:37.914353  745063 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:39:37.914372  745063 notify.go:221] Checking for updates...
	I1027 22:39:37.916603  745063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:39:37.917757  745063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:39:37.918663  745063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:39:37.923403  745063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:39:37.924424  745063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:39:37.926044  745063 config.go:182] Loaded profile config "default-k8s-diff-port-927034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:37.926206  745063 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:37.926345  745063 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:37.926450  745063 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:39:37.952649  745063 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:39:37.952779  745063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:38.033095  745063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-27 22:39:38.021788444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:38.033263  745063 docker.go:318] overlay module found
	I1027 22:39:38.034869  745063 out.go:179] * Using the docker driver based on user configuration
	I1027 22:39:38.035915  745063 start.go:307] selected driver: docker
	I1027 22:39:38.035933  745063 start.go:928] validating driver "docker" against <nil>
	I1027 22:39:38.035982  745063 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:39:38.036762  745063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:39:38.099032  745063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-27 22:39:38.089319282 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:39:38.099282  745063 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:39:38.099562  745063 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:39:38.101200  745063 out.go:179] * Using Docker driver with root privileges
	I1027 22:39:38.102471  745063 cni.go:84] Creating CNI manager for ""
	I1027 22:39:38.102546  745063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:38.102561  745063 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:39:38.102656  745063 start.go:351] cluster config:
	{Name:auto-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:38.103968  745063 out.go:179] * Starting "auto-293335" primary control-plane node in "auto-293335" cluster
	I1027 22:39:38.105034  745063 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:39:38.106512  745063 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:39:38.107610  745063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:38.107652  745063 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:39:38.107668  745063 cache.go:59] Caching tarball of preloaded images
	I1027 22:39:38.107683  745063 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:39:38.107772  745063 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:39:38.107788  745063 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:39:38.107939  745063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/config.json ...
	I1027 22:39:38.107981  745063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/config.json: {Name:mk1ae734ed5e8f20b380b41f1567a6de126721bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:38.135839  745063 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:39:38.135862  745063 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:39:38.135881  745063 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:39:38.135911  745063 start.go:360] acquireMachinesLock for auto-293335: {Name:mk68871849e580837d3f745ed8c659efb677566e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:39:38.136038  745063 start.go:364] duration metric: took 98.223µs to acquireMachinesLock for "auto-293335"
	I1027 22:39:38.136067  745063 start.go:93] Provisioning new machine with config: &{Name:auto-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:39:38.136162  745063 start.go:125] createHost starting for "" (driver="docker")
	I1027 22:39:33.933303  743829 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:39:33.933495  743829 start.go:159] libmachine.API.Create for "newest-cni-290425" (driver="docker")
	I1027 22:39:33.933530  743829 client.go:173] LocalClient.Create starting
	I1027 22:39:33.933602  743829 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:39:33.933642  743829 main.go:143] libmachine: Decoding PEM data...
	I1027 22:39:33.933670  743829 main.go:143] libmachine: Parsing certificate...
	I1027 22:39:33.933734  743829 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:39:33.933760  743829 main.go:143] libmachine: Decoding PEM data...
	I1027 22:39:33.933773  743829 main.go:143] libmachine: Parsing certificate...
	I1027 22:39:33.934140  743829 cli_runner.go:164] Run: docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:39:33.949802  743829 cli_runner.go:211] docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:39:33.949871  743829 network_create.go:284] running [docker network inspect newest-cni-290425] to gather additional debugging logs...
	I1027 22:39:33.949902  743829 cli_runner.go:164] Run: docker network inspect newest-cni-290425
	W1027 22:39:33.965893  743829 cli_runner.go:211] docker network inspect newest-cni-290425 returned with exit code 1
	I1027 22:39:33.965917  743829 network_create.go:287] error running [docker network inspect newest-cni-290425]: docker network inspect newest-cni-290425: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-290425 not found
	I1027 22:39:33.965931  743829 network_create.go:289] output of [docker network inspect newest-cni-290425]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-290425 not found
	
	** /stderr **
	I1027 22:39:33.966038  743829 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:33.982177  743829 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:39:33.983225  743829 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:39:33.983747  743829 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:39:33.984872  743829 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1e840}
	I1027 22:39:33.984899  743829 network_create.go:124] attempt to create docker network newest-cni-290425 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 22:39:33.984958  743829 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-290425 newest-cni-290425
	I1027 22:39:34.045715  743829 network_create.go:108] docker network newest-cni-290425 192.168.76.0/24 created
	I1027 22:39:34.045770  743829 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-290425" container
	I1027 22:39:34.045851  743829 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:39:34.066068  743829 cli_runner.go:164] Run: docker volume create newest-cni-290425 --label name.minikube.sigs.k8s.io=newest-cni-290425 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:39:34.086990  743829 oci.go:103] Successfully created a docker volume newest-cni-290425
	I1027 22:39:34.087070  743829 cli_runner.go:164] Run: docker run --rm --name newest-cni-290425-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-290425 --entrypoint /usr/bin/test -v newest-cni-290425:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:39:34.459469  743829 oci.go:107] Successfully prepared a docker volume newest-cni-290425
	I1027 22:39:34.459516  743829 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:34.459541  743829 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:39:34.459621  743829 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-290425:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 22:39:37.592976  743829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-290425:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.133296316s)
	I1027 22:39:37.593010  743829 kic.go:203] duration metric: took 3.133464845s to extract preloaded images to volume ...
	W1027 22:39:37.593110  743829 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:39:37.593143  743829 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:39:37.593189  743829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:39:37.665112  743829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-290425 --name newest-cni-290425 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-290425 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-290425 --network newest-cni-290425 --ip 192.168.76.2 --volume newest-cni-290425:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:39:37.964421  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Running}}
	I1027 22:39:37.989902  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:39:38.013162  743829 cli_runner.go:164] Run: docker exec newest-cni-290425 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:39:38.067961  743829 oci.go:144] the created container "newest-cni-290425" has a running status.
	I1027 22:39:38.068009  743829 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa...
	I1027 22:39:38.328842  743829 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:39:38.368128  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:39:38.395079  743829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:39:38.395104  743829 kic_runner.go:114] Args: [docker exec --privileged newest-cni-290425 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:39:38.446852  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:39:38.479364  743829 machine.go:94] provisionDockerMachine start ...
	I1027 22:39:38.479474  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:38.498217  743829 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:38.498583  743829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1027 22:39:38.498605  743829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:39:38.650131  743829 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:39:38.650178  743829 ubuntu.go:182] provisioning hostname "newest-cni-290425"
	I1027 22:39:38.650251  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:38.670632  743829 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:38.670872  743829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1027 22:39:38.670892  743829 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-290425 && echo "newest-cni-290425" | sudo tee /etc/hostname
	I1027 22:39:35.838269  741885 provision.go:177] copyRemoteCerts
	I1027 22:39:35.838344  741885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:39:35.838418  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:35.857553  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:35.959276  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:39:35.976202  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:39:35.993380  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:39:36.017798  741885 provision.go:87] duration metric: took 1.159127378s to configureAuth
	I1027 22:39:36.017829  741885 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:39:36.018047  741885 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:36.018192  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:36.038801  741885 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:36.039083  741885 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1027 22:39:36.039100  741885 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:39:37.655975  741885 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:39:37.656017  741885 machine.go:97] duration metric: took 6.322399097s to provisionDockerMachine
	I1027 22:39:37.656033  741885 start.go:293] postStartSetup for "embed-certs-829976" (driver="docker")
	I1027 22:39:37.656049  741885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:39:37.656116  741885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:39:37.656181  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:37.677294  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:37.782059  741885 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:39:37.786201  741885 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:39:37.786226  741885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:39:37.786259  741885 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:39:37.786302  741885 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:39:37.786446  741885 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:39:37.786584  741885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:39:37.797153  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:37.823183  741885 start.go:296] duration metric: took 167.131364ms for postStartSetup
	I1027 22:39:37.823258  741885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:39:37.823307  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:37.846970  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:37.947336  741885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:39:37.953047  741885 fix.go:57] duration metric: took 6.982605213s for fixHost
	I1027 22:39:37.953069  741885 start.go:83] releasing machines lock for "embed-certs-829976", held for 6.982652453s
	I1027 22:39:37.953120  741885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-829976
	I1027 22:39:37.973104  741885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:39:37.973153  741885 ssh_runner.go:195] Run: cat /version.json
	I1027 22:39:37.973361  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:37.973671  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:38.001027  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:38.002000  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:38.107298  741885 ssh_runner.go:195] Run: systemctl --version
	I1027 22:39:38.176625  741885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:39:38.223016  741885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:39:38.231583  741885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:39:38.231655  741885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:39:38.242587  741885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:39:38.242614  741885 start.go:496] detecting cgroup driver to use...
	I1027 22:39:38.242645  741885 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:39:38.242690  741885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:39:38.258575  741885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:39:38.272695  741885 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:39:38.272747  741885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:39:38.295043  741885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:39:38.320744  741885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:39:38.435997  741885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:39:38.542975  741885 docker.go:234] disabling docker service ...
	I1027 22:39:38.543042  741885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:39:38.561360  741885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:39:38.576038  741885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:39:38.681167  741885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:39:38.784815  741885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
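	The block above stops, disables, and masks the cri-docker and docker units so that only CRI-O owns the CRI socket. A short Go sketch of the same cleanup sequence, assuming it runs as root on the node (the systemctl invocations mirror the log; this is not minikube's code):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        steps := [][]string{
	            {"systemctl", "stop", "-f", "cri-docker.socket"},
	            {"systemctl", "stop", "-f", "cri-docker.service"},
	            {"systemctl", "disable", "cri-docker.socket"},
	            {"systemctl", "mask", "cri-docker.service"},
	            {"systemctl", "stop", "-f", "docker.socket"},
	            {"systemctl", "stop", "-f", "docker.service"},
	            {"systemctl", "disable", "docker.socket"},
	            {"systemctl", "mask", "docker.service"},
	        }
	        for _, s := range steps {
	            // Failures are tolerable here (a unit may not exist), as in the log.
	            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
	                fmt.Printf("%v: %v (%s)\n", s, err, out)
	            }
	        }
	    }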
	I1027 22:39:38.801099  741885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:39:38.817417  741885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:39:38.817478  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.828967  741885 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:39:38.829077  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.841914  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.855381  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.866643  741885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:39:38.876788  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.887193  741885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.897906  741885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:38.909124  741885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:39:38.917811  741885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:39:38.926613  741885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:39.028599  741885 ssh_runner.go:195] Run: sudo systemctl restart crio
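	The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and the systemd cgroup manager before CRI-O is restarted. A minimal Go equivalent of those two edits, assuming the same file path and values from the log (error handling trimmed; not minikube's implementation):

	    package main

	    import (
	        "log"
	        "os"
	        "regexp"
	    )

	    func main() {
	        const path = "/etc/crio/crio.conf.d/02-crio.conf"
	        data, err := os.ReadFile(path)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	            ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	        if err := os.WriteFile(path, data, 0o644); err != nil {
	            log.Fatal(err)
	        }
	    }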
	I1027 22:39:39.207105  741885 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:39:39.207189  741885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:39:39.211509  741885 start.go:564] Will wait 60s for crictl version
	I1027 22:39:39.211579  741885 ssh_runner.go:195] Run: which crictl
	I1027 22:39:39.215967  741885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:39:39.243580  741885 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:39:39.243665  741885 ssh_runner.go:195] Run: crio --version
	I1027 22:39:39.274022  741885 ssh_runner.go:195] Run: crio --version
	I1027 22:39:39.311072  741885 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:39:39.312032  741885 cli_runner.go:164] Run: docker network inspect embed-certs-829976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:39.329677  741885 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 22:39:39.334133  741885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
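	The one-liner above refreshes /etc/hosts idempotently: grep -v drops any stale host.minikube.internal line, the current gateway IP is appended, and the result is staged through /tmp/h.$$ before being copied back. A Go sketch of the same update, assuming it already runs as root (the log's version stages through a temp file and sudo cp instead):

	    package main

	    import (
	        "log"
	        "os"
	        "strings"
	    )

	    func main() {
	        const entry = "host.minikube.internal"
	        data, err := os.ReadFile("/etc/hosts")
	        if err != nil {
	            log.Fatal(err)
	        }
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            if !strings.HasSuffix(line, "\t"+entry) { // the grep -v in the log
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, "192.168.85.1\t"+entry) // gateway IP from the log
	        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
	            log.Fatal(err)
	        }
	    }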
	I1027 22:39:39.345082  741885 kubeadm.go:884] updating cluster {Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:39:39.345249  741885 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:39.345317  741885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:39.384843  741885 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:39.384868  741885 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:39:39.384924  741885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:39.416336  741885 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:39.416359  741885 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:39:39.416371  741885 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 22:39:39.416491  741885 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-829976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
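	A note on the kubelet unit rendered above: the empty "ExecStart=" line is deliberate; in a systemd drop-in it clears the ExecStart inherited from the base unit before the new command line is set. A small Go sketch of rendering such a drop-in with text/template, with values taken from this log and the flag list abbreviated (illustrative only, not minikube's template):

	    package main

	    import (
	        "log"
	        "os"
	        "text/template"
	    )

	    const unit = `[Unit]
	    Wants=crio.service

	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

	    [Install]
	    `

	    func main() {
	        t := template.Must(template.New("kubelet").Parse(unit))
	        err := t.Execute(os.Stdout, map[string]string{
	            "Version": "v1.34.1",
	            "Node":    "embed-certs-829976",
	            "IP":      "192.168.85.2",
	        })
	        if err != nil {
	            log.Fatal(err)
	        }
	    }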
	I1027 22:39:39.416567  741885 ssh_runner.go:195] Run: crio config
	I1027 22:39:39.469798  741885 cni.go:84] Creating CNI manager for ""
	I1027 22:39:39.469818  741885 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:39.469844  741885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:39:39.469866  741885 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-829976 NodeName:embed-certs-829976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:39:39.470024  741885 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-829976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
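	The kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that reads such a file and lists each document's apiVersion/kind, assuming gopkg.in/yaml.v3 and the file path from the log (minikube's own handling of this file differs):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "io"
	        "log"
	        "os"

	        "gopkg.in/yaml.v3"
	    )

	    func main() {
	        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer f.Close()

	        dec := yaml.NewDecoder(f)
	        for {
	            var doc struct {
	                APIVersion string `yaml:"apiVersion"`
	                Kind       string `yaml:"kind"`
	            }
	            if err := dec.Decode(&doc); err != nil {
	                if errors.Is(err, io.EOF) {
	                    break // no more documents in the stream
	                }
	                log.Fatal(err)
	            }
	            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	        }
	    }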
	I1027 22:39:39.470080  741885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:39:39.478204  741885 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:39:39.478257  741885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:39:39.486464  741885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 22:39:39.499120  741885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:39:39.512730  741885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 22:39:39.527236  741885 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:39:39.531231  741885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:39:39.541167  741885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:39.640644  741885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:39.667616  741885 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976 for IP: 192.168.85.2
	I1027 22:39:39.667636  741885 certs.go:195] generating shared ca certs ...
	I1027 22:39:39.667656  741885 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:39.667815  741885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:39:39.667877  741885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:39:39.667893  741885 certs.go:257] generating profile certs ...
	I1027 22:39:39.668037  741885 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/client.key
	I1027 22:39:39.668112  741885 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key.a2d2d0b7
	I1027 22:39:39.668178  741885 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key
	I1027 22:39:39.668325  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:39:39.668368  741885 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:39:39.668381  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:39:39.668413  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:39:39.668443  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:39:39.668478  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:39:39.668530  741885 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:39.669365  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:39:39.688561  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:39:39.710071  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:39:39.731217  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:39:39.755360  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 22:39:39.777514  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:39:39.796584  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:39:39.814889  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/embed-certs-829976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:39:39.833080  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:39:39.853822  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:39:39.874289  741885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:39:39.892937  741885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:39:39.907370  741885 ssh_runner.go:195] Run: openssl version
	I1027 22:39:39.914622  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:39:39.923799  741885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:39.928421  741885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:39.928483  741885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:39.971736  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:39:39.981986  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:39:39.991321  741885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:39:39.995642  741885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:39:39.995707  741885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:39:40.035737  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:39:40.044485  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:39:40.054098  741885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:39:40.058621  741885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:39:40.058685  741885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:39:40.104829  741885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
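	The openssl/ln sequence above installs each CA into the OpenSSL trust directory: compute the certificate's subject hash (e.g. b5213941) and link <hash>.0 in /etc/ssl/certs to the PEM so hash-based lookup finds it. A Go sketch of that step, shelling out to openssl for the hash as the log does (paths copied from the log; illustrative only):

	    package main

	    import (
	        "log"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    func main() {
	        pem := "/usr/share/ca-certificates/minikubeCA.pem"
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	        if err != nil {
	            log.Fatal(err)
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        _ = os.Remove(link) // replace any stale link, mirroring ln -fs
	        if err := os.Symlink(pem, link); err != nil {
	            log.Fatal(err)
	        }
	    }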
	I1027 22:39:40.116102  741885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:39:40.120531  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:39:40.159593  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:39:40.207550  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:39:40.254153  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:39:40.329849  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:39:40.388674  741885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
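	Each "openssl x509 ... -checkend 86400" run above fails if the certificate expires within the next 24 hours, which is what triggers cert regeneration on restart. The same check can be done natively; a minimal sketch using crypto/x509 (file path from the log, otherwise illustrative):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	        "time"
	    )

	    func main() {
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            log.Fatal("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // openssl's -checkend 86400: fail if the cert expires within 24h.
	        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
	            fmt.Println("certificate will expire within 24h")
	            os.Exit(1)
	        }
	        fmt.Println("certificate is good for at least another day")
	    }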
	I1027 22:39:40.428586  741885 kubeadm.go:401] StartCluster: {Name:embed-certs-829976 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-829976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:40.428769  741885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:39:40.428860  741885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:39:40.467562  741885 cri.go:89] found id: "2f44a2722d5ccd7616df1090c6bb0dbee4aa51ec06009ab3a0c5b8d4976586ea"
	I1027 22:39:40.467697  741885 cri.go:89] found id: "45a7ab4d457895149bd74409ca1cf2067d30d698e93850bc8e3ded4ce106bbab"
	I1027 22:39:40.467708  741885 cri.go:89] found id: "9ebb5d429db0f5d2cfac0c88b414dd785a0b2d57b9fcfeb926197b670710530b"
	I1027 22:39:40.467721  741885 cri.go:89] found id: "e617c18783204a4f1e575bdec7825512002bad31cb3b04208481ca9f4c563564"
	I1027 22:39:40.467726  741885 cri.go:89] found id: ""
	I1027 22:39:40.467789  741885 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:39:40.485174  741885 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:39:40Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:39:40.485268  741885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:39:40.496454  741885 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:39:40.496479  741885 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:39:40.496535  741885 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:39:40.506544  741885 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:39:40.507296  741885 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-829976" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:39:40.507626  741885 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-829976" cluster setting kubeconfig missing "embed-certs-829976" context setting]
	I1027 22:39:40.508418  741885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:40.510378  741885 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:39:40.520956  741885 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 22:39:40.521003  741885 kubeadm.go:602] duration metric: took 24.515684ms to restartPrimaryControlPlane
	I1027 22:39:40.521017  741885 kubeadm.go:403] duration metric: took 92.448931ms to StartCluster
	I1027 22:39:40.521041  741885 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:40.521138  741885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:39:40.523005  741885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:40.523364  741885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:39:40.523486  741885 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:39:40.523576  741885 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:40.523620  741885 addons.go:69] Setting dashboard=true in profile "embed-certs-829976"
	I1027 22:39:40.523620  741885 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-829976"
	I1027 22:39:40.523636  741885 addons.go:238] Setting addon dashboard=true in "embed-certs-829976"
	W1027 22:39:40.523646  741885 addons.go:247] addon dashboard should already be in state true
	I1027 22:39:40.523649  741885 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-829976"
	W1027 22:39:40.523658  741885 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:39:40.523675  741885 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:39:40.523678  741885 addons.go:69] Setting default-storageclass=true in profile "embed-certs-829976"
	I1027 22:39:40.523702  741885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-829976"
	I1027 22:39:40.523709  741885 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:39:40.524246  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:40.524296  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:40.524690  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:40.526828  741885 out.go:179] * Verifying Kubernetes components...
	I1027 22:39:40.528001  741885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:40.555044  741885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:39:40.556196  741885 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:39:40.556281  741885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:39:40.556427  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:40.558748  741885 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 22:39:40.560001  741885 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 22:39:40.561077  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 22:39:40.561101  741885 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 22:39:40.561172  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:40.573106  741885 addons.go:238] Setting addon default-storageclass=true in "embed-certs-829976"
	W1027 22:39:40.573137  741885 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:39:40.573168  741885 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:39:40.573657  741885 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:39:40.594995  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:40.603541  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:40.615043  741885 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:39:40.615169  741885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:39:40.615293  741885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:39:40.643505  741885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:39:40.717559  741885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:38.141349  745063 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:39:38.141578  745063 start.go:159] libmachine.API.Create for "auto-293335" (driver="docker")
	I1027 22:39:38.141610  745063 client.go:173] LocalClient.Create starting
	I1027 22:39:38.141676  745063 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:39:38.141710  745063 main.go:143] libmachine: Decoding PEM data...
	I1027 22:39:38.141736  745063 main.go:143] libmachine: Parsing certificate...
	I1027 22:39:38.141811  745063 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:39:38.141842  745063 main.go:143] libmachine: Decoding PEM data...
	I1027 22:39:38.141857  745063 main.go:143] libmachine: Parsing certificate...
	I1027 22:39:38.142245  745063 cli_runner.go:164] Run: docker network inspect auto-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:39:38.160771  745063 cli_runner.go:211] docker network inspect auto-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:39:38.160847  745063 network_create.go:284] running [docker network inspect auto-293335] to gather additional debugging logs...
	I1027 22:39:38.160869  745063 cli_runner.go:164] Run: docker network inspect auto-293335
	W1027 22:39:38.178198  745063 cli_runner.go:211] docker network inspect auto-293335 returned with exit code 1
	I1027 22:39:38.178229  745063 network_create.go:287] error running [docker network inspect auto-293335]: docker network inspect auto-293335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-293335 not found
	I1027 22:39:38.178243  745063 network_create.go:289] output of [docker network inspect auto-293335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-293335 not found
	
	** /stderr **
	I1027 22:39:38.178353  745063 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:38.199445  745063 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:39:38.200534  745063 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:39:38.201127  745063 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:39:38.202096  745063 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-882fc6de2a09 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:5e:7d:03:a2:c4} reservation:<nil>}
	I1027 22:39:38.202891  745063 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-19326983879b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:fd:92:c2:f9:aa} reservation:<nil>}
	I1027 22:39:38.204168  745063 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e68010}
	I1027 22:39:38.204196  745063 network_create.go:124] attempt to create docker network auto-293335 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 22:39:38.204250  745063 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-293335 auto-293335
	I1027 22:39:38.277836  745063 network_create.go:108] docker network auto-293335 192.168.94.0/24 created
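	The network.go lines above scan candidate 192.168.x.0/24 blocks in steps of 9 (49, 58, 67, 76, 85, ...) and pick the first subnet not already claimed by an existing bridge network, landing on 192.168.94.0/24 here. A compact Go sketch of that scan; the "taken" set is hard-coded from this log, whereas minikube discovers it from docker network inspect:

	    package main

	    import "fmt"

	    func main() {
	        taken := map[string]bool{
	            "192.168.49.0/24": true,
	            "192.168.58.0/24": true,
	            "192.168.67.0/24": true,
	            "192.168.76.0/24": true,
	            "192.168.85.0/24": true,
	        }
	        for third := 49; third <= 247; third += 9 {
	            cidr := fmt.Sprintf("192.168.%d.0/24", third)
	            if !taken[cidr] {
	                fmt.Println("using free private subnet", cidr) // 192.168.94.0/24 here
	                return
	            }
	        }
	        fmt.Println("no free private subnet found")
	    }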
	I1027 22:39:38.277870  745063 kic.go:121] calculated static IP "192.168.94.2" for the "auto-293335" container
	I1027 22:39:38.277985  745063 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:39:38.304523  745063 cli_runner.go:164] Run: docker volume create auto-293335 --label name.minikube.sigs.k8s.io=auto-293335 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:39:38.329794  745063 oci.go:103] Successfully created a docker volume auto-293335
	I1027 22:39:38.329903  745063 cli_runner.go:164] Run: docker run --rm --name auto-293335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-293335 --entrypoint /usr/bin/test -v auto-293335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:39:38.785518  745063 oci.go:107] Successfully prepared a docker volume auto-293335
	I1027 22:39:38.785571  745063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:38.785600  745063 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:39:38.785671  745063 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
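	The docker run above is the preload trick: bind-mount the lz4-compressed image tarball read-only into a throwaway kicbase container and untar it into the named volume that becomes the node's /var, so the node boots with images already present. A Go sketch of assembling that invocation with os/exec (paths, volume name, and image are from this log; not minikube's code):

	    package main

	    import (
	        "log"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	        cmd := exec.Command("docker", "run", "--rm",
	            "--entrypoint", "/usr/bin/tar",
	            "-v", tarball+":/preloaded.tar:ro",
	            "-v", "auto-293335:/extractDir", // the named volume created earlier
	            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773",
	            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        if err := cmd.Run(); err != nil {
	            log.Fatal(err)
	        }
	    }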
	I1027 22:39:38.853899  743829 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:39:38.854040  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:38.876125  743829 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:38.876435  743829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1027 22:39:38.876467  743829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-290425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-290425/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-290425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:39:39.038446  743829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:39:39.038475  743829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:39:39.038499  743829 ubuntu.go:190] setting up certificates
	I1027 22:39:39.038512  743829 provision.go:84] configureAuth start
	I1027 22:39:39.038584  743829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:39:39.062122  743829 provision.go:143] copyHostCerts
	I1027 22:39:39.062191  743829 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:39:39.062206  743829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:39:39.062274  743829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:39:39.062390  743829 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:39:39.062405  743829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:39:39.062447  743829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:39:39.062543  743829 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:39:39.062556  743829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:39:39.062596  743829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:39:39.062676  743829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-290425 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-290425]
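	The provision step above issues a server certificate signed by the local minikube CA with the SAN list shown (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-290425). A condensed Go sketch of issuing such a cert with crypto/x509; the CA here is generated in-process as a stand-in (libmachine loads the real one from ca.pem/ca-key.pem), and key sizes and lifetimes are illustrative:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        caKey, err := rsa.GenerateKey(rand.Reader, 2048) // stand-in for the real CA key
	        if err != nil {
	            log.Fatal(err)
	        }
	        ca := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().AddDate(10, 0, 0),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        srv := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-290425"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(3, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SAN list exactly as in the provision.go line above.
	            DNSNames:    []string{"localhost", "minikube", "newest-cni-290425"},
	            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }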
	I1027 22:39:39.443175  743829 provision.go:177] copyRemoteCerts
	I1027 22:39:39.443253  743829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:39:39.443303  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:39.463198  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:39.566832  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:39:39.593835  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:39:39.611167  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:39:39.629365  743829 provision.go:87] duration metric: took 590.834285ms to configureAuth
	I1027 22:39:39.629397  743829 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:39:39.629604  743829 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:39.629730  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:39.649412  743829 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:39.649701  743829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1027 22:39:39.649721  743829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:39:39.927025  743829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:39:39.927055  743829 machine.go:97] duration metric: took 1.447661976s to provisionDockerMachine
	I1027 22:39:39.927068  743829 client.go:176] duration metric: took 5.993527223s to LocalClient.Create
	I1027 22:39:39.927092  743829 start.go:167] duration metric: took 5.99359595s to libmachine.API.Create "newest-cni-290425"
	I1027 22:39:39.927104  743829 start.go:293] postStartSetup for "newest-cni-290425" (driver="docker")
	I1027 22:39:39.927116  743829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:39:39.927182  743829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:39:39.927232  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:39.949573  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:40.054515  743829 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:39:40.058505  743829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:39:40.058541  743829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:39:40.058554  743829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:39:40.058614  743829 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:39:40.058714  743829 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:39:40.058835  743829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:39:40.067864  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:40.090763  743829 start.go:296] duration metric: took 163.640069ms for postStartSetup
	I1027 22:39:40.091158  743829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:39:40.112212  743829 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:39:40.112569  743829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:39:40.112633  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:40.134615  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:40.237968  743829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:39:40.245464  743829 start.go:128] duration metric: took 6.313703068s to createHost
	I1027 22:39:40.245515  743829 start.go:83] releasing machines lock for "newest-cni-290425", held for 6.31386504s
	I1027 22:39:40.245615  743829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:39:40.269751  743829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:39:40.269792  743829 ssh_runner.go:195] Run: cat /version.json
	I1027 22:39:40.269838  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:40.270209  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:39:40.305189  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:40.306212  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:39:40.490711  743829 ssh_runner.go:195] Run: systemctl --version
	I1027 22:39:40.500153  743829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:39:40.556539  743829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:39:40.565175  743829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:39:40.565252  743829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:39:40.633796  743829 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 22:39:40.633822  743829 start.go:496] detecting cgroup driver to use...
	I1027 22:39:40.633862  743829 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:39:40.633922  743829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:39:40.662253  743829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:39:40.680485  743829 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:39:40.680555  743829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:39:40.705934  743829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:39:40.733851  743829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:39:40.880360  743829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:39:41.016251  743829 docker.go:234] disabling docker service ...
	I1027 22:39:41.016318  743829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:39:41.045061  743829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:39:41.062254  743829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:39:41.186133  743829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:39:41.287335  743829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:39:41.301710  743829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:39:41.320170  743829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:39:41.320246  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.333068  743829 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:39:41.333140  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.342773  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.352724  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.366213  743829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:39:41.378439  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.391815  743829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:41.411149  743829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
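	Note: the sed invocations between 22:39:41.320 and 22:39:41.411 edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, force conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A sketch of the affected keys after the edits (section placement follows upstream crio.conf; the real drop-in may carry other keys as well):

		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"

		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]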
	I1027 22:39:41.424575  743829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:39:41.433671  743829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:39:41.444172  743829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:41.566487  743829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:39:40.736422  741885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:39:40.737668  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 22:39:40.737690  741885 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 22:39:40.739330  741885 node_ready.go:35] waiting up to 6m0s for node "embed-certs-829976" to be "Ready" ...
	I1027 22:39:40.758148  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 22:39:40.758241  741885 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 22:39:40.784076  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 22:39:40.784116  741885 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 22:39:40.801369  741885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:39:40.816291  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 22:39:40.816318  741885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 22:39:40.835377  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 22:39:40.835407  741885 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 22:39:40.863565  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 22:39:40.863596  741885 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 22:39:40.883782  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 22:39:40.883817  741885 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 22:39:40.904058  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 22:39:40.904086  741885 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 22:39:40.931506  741885 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:39:40.931534  741885 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 22:39:40.953268  741885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:39:42.383129  741885 node_ready.go:49] node "embed-certs-829976" is "Ready"
	I1027 22:39:42.383171  741885 node_ready.go:38] duration metric: took 1.643810214s for node "embed-certs-829976" to be "Ready" ...
	I1027 22:39:42.383198  741885 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:39:42.383259  741885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:39:43.393739  741885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.657274347s)
	I1027 22:39:43.393766  741885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.592355735s)
	I1027 22:39:43.884265  741885 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.50098346s)
	I1027 22:39:43.884284  741885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.93092816s)
	I1027 22:39:43.884305  741885 api_server.go:72] duration metric: took 3.360898246s to wait for apiserver process to appear ...
	I1027 22:39:43.884314  741885 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:39:43.884337  741885 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 22:39:43.885657  741885 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-829976 addons enable metrics-server
	
	I1027 22:39:43.887429  741885 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1027 22:39:43.868160  743829 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.301623825s)
	I1027 22:39:43.868205  743829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:39:43.868258  743829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:39:43.874444  743829 start.go:564] Will wait 60s for crictl version
	I1027 22:39:43.874521  743829 ssh_runner.go:195] Run: which crictl
	I1027 22:39:43.879487  743829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:39:43.920116  743829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:39:43.920240  743829 ssh_runner.go:195] Run: crio --version
	I1027 22:39:43.958916  743829 ssh_runner.go:195] Run: crio --version
	I1027 22:39:43.996688  743829 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:39:43.997891  743829 cli_runner.go:164] Run: docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:44.020082  743829 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:39:44.024301  743829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
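	Note: the { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp pattern above is deliberate: inside a Docker container /etc/hosts is a bind mount, so the file must be rewritten through cp (which truncates the mounted inode) rather than mv or sed -i (which would replace the file underneath the mount). The idiom in isolation, mirroring the command above:

		# rebuild the file in a temp location, then copy over the bind-mounted target
		{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$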
	I1027 22:39:44.102757  743829 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1027 22:39:43.888680  741885 addons.go:514] duration metric: took 3.365238396s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 22:39:43.890605  741885 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:39:43.890627  741885 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
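	Note: this 500 is the expected transient during apiserver startup: every check except poststarthook/rbac/bootstrap-roles reports ok, and that hook flips to ok once the bootstrap RBAC objects are written (the 200 at 22:39:44.391 below confirms it roughly half a second later). The same verbose view can be fetched by hand, assuming anonymous access to /healthz is bound via the default system:public-info-viewer role:

		kubectl get --raw='/healthz?verbose'
		# or, mirroring the probe in this log:
		curl -sk 'https://192.168.85.2:8443/healthz?verbose'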
	I1027 22:39:44.385109  741885 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 22:39:44.391373  741885 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 22:39:44.392829  741885 api_server.go:141] control plane version: v1.34.1
	I1027 22:39:44.393268  741885 api_server.go:131] duration metric: took 508.94097ms to wait for apiserver health ...
	I1027 22:39:44.393301  741885 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:39:44.398759  741885 system_pods.go:59] 8 kube-system pods found
	I1027 22:39:44.398916  741885 system_pods.go:61] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:39:44.398934  741885 system_pods.go:61] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:39:44.398954  741885 system_pods.go:61] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:39:44.398978  741885 system_pods.go:61] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:39:44.398996  741885 system_pods.go:61] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:39:44.399003  741885 system_pods.go:61] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:39:44.399016  741885 system_pods.go:61] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:39:44.399023  741885 system_pods.go:61] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:39:44.399033  741885 system_pods.go:74] duration metric: took 5.699715ms to wait for pod list to return data ...
	I1027 22:39:44.399052  741885 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:39:44.402736  741885 default_sa.go:45] found service account: "default"
	I1027 22:39:44.402819  741885 default_sa.go:55] duration metric: took 3.757403ms for default service account to be created ...
	I1027 22:39:44.402835  741885 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:39:44.405674  741885 system_pods.go:86] 8 kube-system pods found
	I1027 22:39:44.405750  741885 system_pods.go:89] "coredns-66bc5c9577-msbj9" [eabc58bc-8437-422d-bed2-b0d37d4bb14b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:39:44.405766  741885 system_pods.go:89] "etcd-embed-certs-829976" [4c420d10-88b4-4e9b-8edc-73a2bcb14fe3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:39:44.405775  741885 system_pods.go:89] "kindnet-dtjql" [8e75d998-47cc-4e2c-b1d2-7b6069c821f8] Running
	I1027 22:39:44.405787  741885 system_pods.go:89] "kube-apiserver-embed-certs-829976" [dab60253-4b47-45bc-a7d0-21de852d913c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:39:44.405818  741885 system_pods.go:89] "kube-controller-manager-embed-certs-829976" [434b07e1-c7e4-41f9-a8de-5d24091f627c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:39:44.405927  741885 system_pods.go:89] "kube-proxy-gf725" [3751b38d-bae6-4ea8-9669-346eb3fd7457] Running
	I1027 22:39:44.405980  741885 system_pods.go:89] "kube-scheduler-embed-certs-829976" [479c9aa0-d1dd-416c-94fe-53a85d338715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:39:44.405998  741885 system_pods.go:89] "storage-provisioner" [fcbb9eb6-2144-438f-abf4-a4bd189f88f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:39:44.406007  741885 system_pods.go:126] duration metric: took 3.165876ms to wait for k8s-apps to be running ...
	I1027 22:39:44.406016  741885 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:39:44.406075  741885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:39:44.423551  741885 system_svc.go:56] duration metric: took 17.525198ms WaitForService to wait for kubelet
	I1027 22:39:44.423621  741885 kubeadm.go:587] duration metric: took 3.900213782s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:39:44.423649  741885 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:39:44.426477  741885 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:39:44.426508  741885 node_conditions.go:123] node cpu capacity is 8
	I1027 22:39:44.426525  741885 node_conditions.go:105] duration metric: took 2.868803ms to run NodePressure ...
	I1027 22:39:44.426540  741885 start.go:242] waiting for startup goroutines ...
	I1027 22:39:44.426550  741885 start.go:247] waiting for cluster config update ...
	I1027 22:39:44.426571  741885 start.go:256] writing updated cluster config ...
	I1027 22:39:44.426887  741885 ssh_runner.go:195] Run: rm -f paused
	I1027 22:39:44.432668  741885 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:39:44.441454  741885 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-msbj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:39:44.104817  743829 kubeadm.go:884] updating cluster {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:39:44.105013  743829 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:44.105099  743829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:44.156373  743829 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:44.156403  743829 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:39:44.156472  743829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:44.185867  743829 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:44.185891  743829 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:39:44.185899  743829 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 22:39:44.186028  743829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-290425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
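	Note: the empty ExecStart= line in the unit text above is the standard systemd drop-in idiom: it clears the ExecStart list inherited from the base kubelet.service so the drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf at 22:39:44.251 below) can define the real command line. The merged result can be inspected with:

		systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in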
	I1027 22:39:44.186125  743829 ssh_runner.go:195] Run: crio config
	I1027 22:39:44.234853  743829 cni.go:84] Creating CNI manager for ""
	I1027 22:39:44.234876  743829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:44.234904  743829 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 22:39:44.234934  743829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-290425 NodeName:newest-cni-290425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:39:44.235129  743829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-290425"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
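	
	Note: the kubeadm config printed above is what lands in /var/tmp/minikube/kubeadm.yaml.new at 22:39:44.288 below (2211 bytes). It can be sanity-checked before init; a sketch using the binaries path from this run (the validate subcommand exists in recent kubeadm releases, including v1.34):

		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
		# exercise the full config without touching the node:
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run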
	
	I1027 22:39:44.235214  743829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:39:44.243674  743829 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:39:44.243768  743829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:39:44.251841  743829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:39:44.268218  743829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:39:44.288231  743829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1027 22:39:44.305390  743829 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:39:44.310176  743829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:39:44.322997  743829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:44.436799  743829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:44.466259  743829 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425 for IP: 192.168.76.2
	I1027 22:39:44.466327  743829 certs.go:195] generating shared ca certs ...
	I1027 22:39:44.466374  743829 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.466566  743829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:39:44.466635  743829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:39:44.466646  743829 certs.go:257] generating profile certs ...
	I1027 22:39:44.466714  743829 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key
	I1027 22:39:44.466727  743829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.crt with IP's: []
	I1027 22:39:44.658433  743829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.crt ...
	I1027 22:39:44.658466  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.crt: {Name:mk52bb5b0c9e51e109632c9ea2227777d91b7aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.658625  743829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key ...
	I1027 22:39:44.658645  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key: {Name:mkf30bcc1c690649895c4ff985af3da1c7fa30b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.658784  743829 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67
	I1027 22:39:44.658807  743829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt.46af5a67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 22:39:44.880190  743829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt.46af5a67 ...
	I1027 22:39:44.880222  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt.46af5a67: {Name:mk685cc4ab6a3b8ac44496e69c7626f728be2214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.880392  743829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67 ...
	I1027 22:39:44.880410  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67: {Name:mkc70edbe60feebf38e5d81382cc70bec4258b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:44.880535  743829 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt.46af5a67 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt
	I1027 22:39:44.880651  743829 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key
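	Note: the apiserver cert assembled above carries the SANs requested at 22:39:44.658807: 10.96.0.1 (the kubernetes service ClusterIP, first address of ServiceCIDR 10.96.0.0/12), 127.0.0.1, 10.0.0.1, and the node IP 192.168.76.2. One way to confirm them on the finished cert, using the profile path from this run:

		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt \
		  | grep -A1 'Subject Alternative Name'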
	I1027 22:39:44.880741  743829 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key
	I1027 22:39:44.880766  743829 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt with IP's: []
	I1027 22:39:45.629017  743829 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt ...
	I1027 22:39:45.629044  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt: {Name:mk31788c60402132a2fbf20f2a07e83085ee1b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:45.629222  743829 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key ...
	I1027 22:39:45.629237  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key: {Name:mk627c14beb41c896c195dd19330f61236072a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:45.629422  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:39:45.629460  743829 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:39:45.629470  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:39:45.629494  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:39:45.629516  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:39:45.629536  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:39:45.629573  743829 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:45.630159  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:39:45.648938  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:39:45.667176  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:39:45.685148  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:39:45.709657  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:39:45.735490  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:39:45.754144  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:39:45.772826  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:39:45.790377  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:39:45.811457  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:39:45.829179  743829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:39:45.848049  743829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:39:45.861910  743829 ssh_runner.go:195] Run: openssl version
	I1027 22:39:45.869071  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:39:45.877902  743829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:39:45.881602  743829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:39:45.881653  743829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:39:45.921990  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:39:45.932750  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:39:45.944674  743829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:45.949325  743829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:45.949373  743829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:45.988788  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:39:45.999585  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:39:46.009637  743829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:39:46.013527  743829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:39:46.013601  743829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:39:46.050768  743829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
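	Note: the openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: TLS clients locate a CA under /etc/ssl/certs by its eight-hex-digit subject hash, so every PEM needs a <hash>.0 symlink beside it. Reproducing the naming for the minikube CA (hash value as logged at 22:39:45.988):

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
		readlink /etc/ssl/certs/b5213941.0                                        # -> /etc/ssl/certs/minikubeCA.pem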
	I1027 22:39:46.059139  743829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:39:46.062819  743829 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:39:46.062879  743829 kubeadm.go:401] StartCluster: {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:46.062982  743829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:39:46.063030  743829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:39:46.091173  743829 cri.go:89] found id: ""
	I1027 22:39:46.091284  743829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:39:46.099380  743829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:39:46.107380  743829 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:39:46.107438  743829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:39:46.115813  743829 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:39:46.115827  743829 kubeadm.go:158] found existing configuration files:
	
	I1027 22:39:46.115867  743829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:39:46.123572  743829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:39:46.123626  743829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:39:46.131366  743829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:39:46.138690  743829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:39:46.138742  743829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:39:46.145973  743829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:39:46.153392  743829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:39:46.153439  743829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:39:46.160171  743829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:39:46.167208  743829 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:39:46.167251  743829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:39:46.174088  743829 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:39:46.213498  743829 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:39:46.213552  743829 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:39:46.235328  743829 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:39:46.235414  743829 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:39:46.235463  743829 kubeadm.go:319] OS: Linux
	I1027 22:39:46.235535  743829 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:39:46.235657  743829 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:39:46.235742  743829 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:39:46.235823  743829 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:39:46.235897  743829 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:39:46.236016  743829 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:39:46.236085  743829 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:39:46.236150  743829 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:39:46.303608  743829 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:39:46.303703  743829 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:39:46.303781  743829 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:39:46.314237  743829 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:39:43.740505  745063 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.954767804s)
	I1027 22:39:43.740549  745063 kic.go:203] duration metric: took 4.954944829s to extract preloaded images to volume ...
	W1027 22:39:43.740641  745063 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:39:43.740675  745063 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:39:43.740716  745063 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:39:43.840567  745063 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-293335 --name auto-293335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-293335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-293335 --network auto-293335 --ip 192.168.94.2 --volume auto-293335:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:39:44.266931  745063 cli_runner.go:164] Run: docker container inspect auto-293335 --format={{.State.Running}}
	I1027 22:39:44.287207  745063 cli_runner.go:164] Run: docker container inspect auto-293335 --format={{.State.Status}}
	I1027 22:39:44.308387  745063 cli_runner.go:164] Run: docker exec auto-293335 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:39:44.358362  745063 oci.go:144] the created container "auto-293335" has a running status.
	I1027 22:39:44.358400  745063 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/auto-293335/id_rsa...
	I1027 22:39:44.810960  745063 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/auto-293335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:39:44.838902  745063 cli_runner.go:164] Run: docker container inspect auto-293335 --format={{.State.Status}}
	I1027 22:39:44.859654  745063 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:39:44.859678  745063 kic_runner.go:114] Args: [docker exec --privileged auto-293335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:39:44.906539  745063 cli_runner.go:164] Run: docker container inspect auto-293335 --format={{.State.Status}}
	I1027 22:39:44.923974  745063 machine.go:94] provisionDockerMachine start ...
	I1027 22:39:44.924063  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:44.940260  745063 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:44.940580  745063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1027 22:39:44.940606  745063 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:39:45.085480  745063 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-293335
	
	I1027 22:39:45.085510  745063 ubuntu.go:182] provisioning hostname "auto-293335"
	I1027 22:39:45.085567  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:45.103158  745063 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:45.103416  745063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1027 22:39:45.103435  745063 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-293335 && echo "auto-293335" | sudo tee /etc/hostname
	I1027 22:39:45.256979  745063 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-293335
	
	I1027 22:39:45.257082  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:45.274410  745063 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:45.274635  745063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1027 22:39:45.274678  745063 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-293335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-293335/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-293335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:39:45.415782  745063 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:39:45.415809  745063 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:39:45.415833  745063 ubuntu.go:190] setting up certificates
	I1027 22:39:45.415848  745063 provision.go:84] configureAuth start
	I1027 22:39:45.415912  745063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-293335
	I1027 22:39:45.434590  745063 provision.go:143] copyHostCerts
	I1027 22:39:45.434655  745063 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:39:45.434681  745063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:39:45.434762  745063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:39:45.434888  745063 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:39:45.434901  745063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:39:45.434972  745063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:39:45.435135  745063 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:39:45.435148  745063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:39:45.435200  745063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:39:45.435280  745063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.auto-293335 san=[127.0.0.1 192.168.94.2 auto-293335 localhost minikube]
	I1027 22:39:45.453776  745063 provision.go:177] copyRemoteCerts
	I1027 22:39:45.453843  745063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:39:45.453907  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:45.473413  745063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/auto-293335/id_rsa Username:docker}
	I1027 22:39:45.577015  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:39:45.600797  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1027 22:39:45.619308  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:39:45.637836  745063 provision.go:87] duration metric: took 221.971073ms to configureAuth
	I1027 22:39:45.637878  745063 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:39:45.638072  745063 config.go:182] Loaded profile config "auto-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:39:45.638209  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:45.656678  745063 main.go:143] libmachine: Using SSH client type: native
	I1027 22:39:45.656891  745063 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1027 22:39:45.656912  745063 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:39:45.935458  745063 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
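The drop-in written above hands CRI-O `--insecure-registry 10.96.0.0/12`, i.e. the cluster's service CIDR, so image pulls from in-cluster registry Services skip TLS verification. Presumably the kicbase crio.service sources /etc/sysconfig/crio.minikube as an EnvironmentFile (an assumption, not visible in this log); that can be checked with:

	docker exec auto-293335 systemctl cat crio | grep -i environment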
	
	I1027 22:39:45.935489  745063 machine.go:97] duration metric: took 1.011493836s to provisionDockerMachine
	I1027 22:39:45.935502  745063 client.go:176] duration metric: took 7.793882587s to LocalClient.Create
	I1027 22:39:45.935525  745063 start.go:167] duration metric: took 7.793952123s to libmachine.API.Create "auto-293335"
	I1027 22:39:45.935537  745063 start.go:293] postStartSetup for "auto-293335" (driver="docker")
	I1027 22:39:45.935549  745063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:39:45.935624  745063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:39:45.935671  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:45.956098  745063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/auto-293335/id_rsa Username:docker}
	I1027 22:39:46.059679  745063 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:39:46.063297  745063 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:39:46.063327  745063 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:39:46.063339  745063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:39:46.063397  745063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:39:46.063508  745063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:39:46.063624  745063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:39:46.071086  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:46.092000  745063 start.go:296] duration metric: took 156.447208ms for postStartSetup
	I1027 22:39:46.092515  745063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-293335
	I1027 22:39:46.111298  745063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/config.json ...
	I1027 22:39:46.111527  745063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:39:46.111566  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:46.129000  745063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/auto-293335/id_rsa Username:docker}
	I1027 22:39:46.227004  745063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:39:46.231993  745063 start.go:128] duration metric: took 8.095810711s to createHost
	I1027 22:39:46.232020  745063 start.go:83] releasing machines lock for "auto-293335", held for 8.095966485s
	I1027 22:39:46.232093  745063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-293335
	I1027 22:39:46.250310  745063 ssh_runner.go:195] Run: cat /version.json
	I1027 22:39:46.250364  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:46.250422  745063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:39:46.250496  745063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-293335
	I1027 22:39:46.268517  745063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/auto-293335/id_rsa Username:docker}
	I1027 22:39:46.269453  745063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/auto-293335/id_rsa Username:docker}
	I1027 22:39:46.445177  745063 ssh_runner.go:195] Run: systemctl --version
	I1027 22:39:46.452590  745063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:39:46.490534  745063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:39:46.495521  745063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:39:46.495586  745063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:39:46.523938  745063 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
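Rather than deleting pre-existing bridge/podman CNI definitions, minikube parks them under a *.mk_disabled suffix so they cannot race the CNI it deploys later in this run (kindnet). To see what was parked (illustrative):

	docker exec auto-293335 ls /etc/cni/net.d
	# *.mk_disabled entries are the configs moved aside above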
	I1027 22:39:46.523983  745063 start.go:496] detecting cgroup driver to use...
	I1027 22:39:46.524017  745063 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:39:46.524071  745063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:39:46.540866  745063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:39:46.553155  745063 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:39:46.553205  745063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:39:46.569570  745063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:39:46.585775  745063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:39:46.682518  745063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:39:46.795686  745063 docker.go:234] disabling docker service ...
	I1027 22:39:46.795764  745063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:39:46.820328  745063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:39:46.835361  745063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:39:46.935414  745063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:39:47.021831  745063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:39:47.036053  745063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:39:47.053766  745063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:39:47.053823  745063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:47.068064  745063 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:39:47.068123  745063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:47.079550  745063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:47.091054  745063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:47.101539  745063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:39:47.111578  745063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:47.121010  745063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:39:47.135281  745063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
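Taken together, the sed/grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf with four effective settings; a sketch of verifying them, assuming the stock kicbase layout of that file:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",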
	I1027 22:39:47.144433  745063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:39:47.151814  745063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:39:47.159113  745063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:47.242996  745063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:39:47.377339  745063 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:39:47.377401  745063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:39:47.381486  745063 start.go:564] Will wait 60s for crictl version
	I1027 22:39:47.381553  745063 ssh_runner.go:195] Run: which crictl
	I1027 22:39:47.385279  745063 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:39:47.410297  745063 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:39:47.410393  745063 ssh_runner.go:195] Run: crio --version
	I1027 22:39:47.443362  745063 ssh_runner.go:195] Run: crio --version
	I1027 22:39:47.477820  745063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:39:47.478901  745063 cli_runner.go:164] Run: docker network inspect auto-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:39:47.497791  745063 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 22:39:47.502514  745063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:39:47.512765  745063 kubeadm.go:884] updating cluster {Name:auto-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:39:47.512889  745063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:39:47.512954  745063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:47.548029  745063 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:47.548052  745063 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:39:47.548102  745063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:39:47.575536  745063 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:39:47.575558  745063 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:39:47.575567  745063 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 22:39:47.575679  745063 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-293335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:39:47.575753  745063 ssh_runner.go:195] Run: crio config
	I1027 22:39:47.624464  745063 cni.go:84] Creating CNI manager for ""
	I1027 22:39:47.624483  745063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:47.624497  745063 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:39:47.624529  745063 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-293335 NodeName:auto-293335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:39:47.624736  745063 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-293335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
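The rendered kubeadm.yaml above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file. Recent kubeadm releases can lint such a file before init is attempted; a hedged pre-check using the binaries staged in this run:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml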
	
	I1027 22:39:47.624872  745063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:39:47.633908  745063 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:39:47.634052  745063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:39:47.642118  745063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1027 22:39:47.655522  745063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:39:47.671713  745063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1027 22:39:47.685025  745063 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:39:47.689514  745063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:39:47.700745  745063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:39:47.796182  745063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:39:47.821377  745063 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335 for IP: 192.168.94.2
	I1027 22:39:47.821400  745063 certs.go:195] generating shared ca certs ...
	I1027 22:39:47.821422  745063 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:47.821684  745063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:39:47.821746  745063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:39:47.821763  745063 certs.go:257] generating profile certs ...
	I1027 22:39:47.821834  745063 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/client.key
	I1027 22:39:47.821856  745063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/client.crt with IP's: []
	I1027 22:39:46.317979  743829 out.go:252]   - Generating certificates and keys ...
	I1027 22:39:46.318079  743829 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:39:46.318172  743829 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:39:46.720338  743829 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:39:46.790759  743829 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:39:46.912534  743829 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:39:47.219662  743829 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:39:47.290077  743829 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:39:47.290273  743829 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-290425] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 22:39:47.550677  743829 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:39:47.550889  743829 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-290425] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 22:39:48.196502  743829 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:39:48.173570  745063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/client.crt ...
	I1027 22:39:48.173604  745063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/client.crt: {Name:mk75ec43e28c1c2ac0a036001faf2e527e0cad34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:48.173819  745063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/client.key ...
	I1027 22:39:48.173840  745063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/client.key: {Name:mkef49a043cef850a06b4acdfc81bdeb1cd12110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:48.173968  745063 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.key.e7915827
	I1027 22:39:48.173987  745063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.crt.e7915827 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1027 22:39:48.210661  745063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.crt.e7915827 ...
	I1027 22:39:48.210692  745063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.crt.e7915827: {Name:mk2d6271fc3260517bb138b492a0d4b1a1eb3367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:48.210892  745063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.key.e7915827 ...
	I1027 22:39:48.210919  745063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.key.e7915827: {Name:mk738fc2a4593ceb90a656bac95b386cef20539f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:48.211063  745063 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.crt.e7915827 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.crt
	I1027 22:39:48.211162  745063 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.key.e7915827 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.key
	I1027 22:39:48.211242  745063 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/proxy-client.key
	I1027 22:39:48.211259  745063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/proxy-client.crt with IP's: []
	I1027 22:39:48.371155  745063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/proxy-client.crt ...
	I1027 22:39:48.371192  745063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/proxy-client.crt: {Name:mke2206cffc7d56b8344c164a1072ac86a37593d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:48.371404  745063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/proxy-client.key ...
	I1027 22:39:48.371432  745063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/proxy-client.key: {Name:mkf518ee06cb4678bc2cfdac6889487f7d3ba53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:39:48.371694  745063 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:39:48.371746  745063 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:39:48.371763  745063 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:39:48.371797  745063 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:39:48.371830  745063 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:39:48.371859  745063 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:39:48.371921  745063 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:39:48.372770  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:39:48.398328  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:39:48.427545  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:39:48.452983  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:39:48.479157  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1027 22:39:48.505329  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:39:48.530319  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:39:48.554570  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/auto-293335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:39:48.578333  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:39:48.606570  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:39:48.636218  745063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:39:48.668001  745063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:39:48.687490  745063 ssh_runner.go:195] Run: openssl version
	I1027 22:39:48.696119  745063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:39:48.709386  745063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:39:48.715020  745063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:39:48.715090  745063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:39:48.778456  745063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:39:48.791591  745063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:39:48.804758  745063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:48.810339  745063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:48.810412  745063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:39:48.866173  745063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:39:48.878800  745063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:39:48.890677  745063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:39:48.896349  745063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:39:48.896418  745063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:39:48.956984  745063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
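The test -L/ln -fs sequence above implements OpenSSL's hashed-directory lookup: every CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, which is why minikubeCA.pem is linked as b5213941.0 and 4856682.pem as 3ec20f2e.0. The hash in each link name comes straight from:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the symlink created above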
	I1027 22:39:48.970010  745063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:39:48.975032  745063 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:39:48.975107  745063 kubeadm.go:401] StartCluster: {Name:auto-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:39:48.975227  745063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:39:48.975281  745063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:39:49.015449  745063 cri.go:89] found id: ""
	I1027 22:39:49.015520  745063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:39:49.026657  745063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:39:49.036925  745063 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:39:49.036995  745063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:39:49.045400  745063 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:39:49.045427  745063 kubeadm.go:158] found existing configuration files:
	
	I1027 22:39:49.045477  745063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:39:49.055304  745063 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:39:49.055483  745063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:39:49.064291  745063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:39:49.074041  745063 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:39:49.074103  745063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:39:49.083717  745063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:39:49.094699  745063 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:39:49.094765  745063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:39:49.105801  745063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:39:49.116563  745063 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:39:49.116623  745063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:39:49.127438  745063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:39:49.191138  745063 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:39:49.191218  745063 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:39:49.226713  745063 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:39:49.227365  745063 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:39:49.227436  745063 kubeadm.go:319] OS: Linux
	I1027 22:39:49.227501  745063 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:39:49.227565  745063 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:39:49.227624  745063 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:39:49.227683  745063 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:39:49.227747  745063 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:39:49.227818  745063 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:39:49.227883  745063 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:39:49.228036  745063 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:39:49.310110  745063 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:39:49.310394  745063 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:39:49.310518  745063 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:39:49.318424  745063 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:39:48.844765  743829 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:39:49.223279  743829 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:39:49.223383  743829 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:39:49.626637  743829 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:39:49.948358  743829 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:39:50.047931  743829 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:39:50.241922  743829 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:39:50.416297  743829 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:39:50.417867  743829 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:39:50.425843  743829 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 22:39:46.446383  741885 pod_ready.go:104] pod "coredns-66bc5c9577-msbj9" is not "Ready", error: <nil>
	W1027 22:39:48.448513  741885 pod_ready.go:104] pod "coredns-66bc5c9577-msbj9" is not "Ready", error: <nil>
	W1027 22:39:50.449320  741885 pod_ready.go:104] pod "coredns-66bc5c9577-msbj9" is not "Ready", error: <nil>
	I1027 22:39:49.320373  745063 out.go:252]   - Generating certificates and keys ...
	I1027 22:39:49.320476  745063 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:39:49.320574  745063 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:39:50.063490  745063 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:39:50.756432  745063 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:39:51.036300  745063 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:39:52.050880  745063 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:39:52.346456  745063 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:39:52.346577  745063 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-293335 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:39:50.428285  743829 out.go:252]   - Booting up control plane ...
	I1027 22:39:50.428403  743829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:39:50.428491  743829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:39:50.428566  743829 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:39:50.452596  743829 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:39:50.452875  743829 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:39:50.463038  743829 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:39:50.463269  743829 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:39:50.463312  743829 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:39:50.599394  743829 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:39:50.599570  743829 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:39:51.600913  743829 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00141409s
	I1027 22:39:51.605811  743829 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:39:51.606052  743829 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 22:39:51.606257  743829 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:39:51.606368  743829 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
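The three control-plane-check URLs are ordinary health endpoints and can be probed by hand while kubeadm waits; controller-manager and scheduler serve self-signed certificates on localhost, and the apiserver check is easiest through kubectl. Illustrative:

	curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
	kubectl get --raw /livez                   # kube-apiserver, with admin credentials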
	I1027 22:39:53.117533  745063 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:39:53.117970  745063 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-293335 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:39:53.484731  745063 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:39:53.659153  745063 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:39:54.183581  745063 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:39:54.183715  745063 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:39:54.431047  745063 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:39:54.756546  745063 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:39:55.084335  745063 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:39:55.286937  745063 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:39:55.361765  745063 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:39:55.362273  745063 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:39:55.366278  745063 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 22:39:52.950753  741885 pod_ready.go:104] pod "coredns-66bc5c9577-msbj9" is not "Ready", error: <nil>
	W1027 22:39:55.447130  741885 pod_ready.go:104] pod "coredns-66bc5c9577-msbj9" is not "Ready", error: <nil>
	I1027 22:39:53.758921  743829 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.152882683s
	I1027 22:39:54.516436  743829 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.910710785s
	I1027 22:39:56.108407  743829 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502817211s
	I1027 22:39:56.122667  743829 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:39:56.134117  743829 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:39:56.143602  743829 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:39:56.143902  743829 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-290425 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:39:56.151872  743829 kubeadm.go:319] [bootstrap-token] Using token: vmy9di.1cub2yjo32im3kkk
	I1027 22:39:56.153316  743829 out.go:252]   - Configuring RBAC rules ...
	I1027 22:39:56.153502  743829 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:39:56.156786  743829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:39:56.162209  743829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:39:56.164653  743829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:39:56.167013  743829 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:39:56.170030  743829 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:39:56.515354  743829 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:39:56.934620  743829 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:39:57.515083  743829 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:39:57.516038  743829 kubeadm.go:319] 
	I1027 22:39:57.516164  743829 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:39:57.516188  743829 kubeadm.go:319] 
	I1027 22:39:57.516279  743829 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:39:57.516288  743829 kubeadm.go:319] 
	I1027 22:39:57.516325  743829 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:39:57.516408  743829 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:39:57.516480  743829 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:39:57.516489  743829 kubeadm.go:319] 
	I1027 22:39:57.516566  743829 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:39:57.516574  743829 kubeadm.go:319] 
	I1027 22:39:57.516652  743829 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:39:57.516663  743829 kubeadm.go:319] 
	I1027 22:39:57.516742  743829 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:39:57.516860  743829 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:39:57.516991  743829 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:39:57.517001  743829 kubeadm.go:319] 
	I1027 22:39:57.517123  743829 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:39:57.517245  743829 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:39:57.517266  743829 kubeadm.go:319] 
	I1027 22:39:57.517378  743829 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vmy9di.1cub2yjo32im3kkk \
	I1027 22:39:57.517544  743829 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 22:39:57.517578  743829 kubeadm.go:319] 	--control-plane 
	I1027 22:39:57.517586  743829 kubeadm.go:319] 
	I1027 22:39:57.517718  743829 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:39:57.517728  743829 kubeadm.go:319] 
	I1027 22:39:57.517842  743829 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vmy9di.1cub2yjo32im3kkk \
	I1027 22:39:57.517984  743829 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
	I1027 22:39:57.520702  743829 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 22:39:57.520814  743829 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
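The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 over the cluster CA's public key, so a joining node can pin the control plane it talks to. It can be recomputed on the node with the standard kubeadm recipe (note minikube keeps its certs under /var/lib/minikube/certs, per the certificatesDir above, rather than /etc/kubernetes/pki):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'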
	I1027 22:39:57.520842  743829 cni.go:84] Creating CNI manager for ""
	I1027 22:39:57.520852  743829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:39:57.522982  743829 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:39:55.367989  745063 out.go:252]   - Booting up control plane ...
	I1027 22:39:55.368106  745063 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:39:55.368370  745063 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:39:55.369690  745063 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:39:55.399417  745063 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:39:55.399545  745063 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:39:55.406189  745063 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:39:55.406465  745063 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:39:55.406538  745063 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:39:55.524252  745063 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:39:55.524419  745063 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:39:56.525131  745063 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001039157s
	I1027 22:39:56.529826  745063 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:39:56.530070  745063 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1027 22:39:56.530235  745063 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:39:56.530675  745063 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:39:57.884351  745063 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.354383608s
	I1027 22:39:57.524044  743829 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:39:57.528612  743829 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:39:57.528631  743829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:39:57.543792  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:39:57.811404  743829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:39:57.811468  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:39:57.811474  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-290425 minikube.k8s.io/updated_at=2025_10_27T22_39_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=newest-cni-290425 minikube.k8s.io/primary=true
	I1027 22:39:57.826396  743829 ops.go:34] apiserver oom_adj: -16
	I1027 22:39:57.930252  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:39:58.430386  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:39:58.665568  745063 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.135997398s
	I1027 22:40:00.531398  745063 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001781694s
	I1027 22:40:00.542026  745063 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:40:00.551624  745063 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:40:00.559580  745063 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:40:00.559844  745063 kubeadm.go:319] [mark-control-plane] Marking the node auto-293335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:40:00.568357  745063 kubeadm.go:319] [bootstrap-token] Using token: kwkkt3.nstsis2boahtw20w
	W1027 22:39:57.948507  741885 pod_ready.go:104] pod "coredns-66bc5c9577-msbj9" is not "Ready", error: <nil>
	W1027 22:40:00.447534  741885 pod_ready.go:104] pod "coredns-66bc5c9577-msbj9" is not "Ready", error: <nil>
	I1027 22:40:00.569540  745063 out.go:252]   - Configuring RBAC rules ...
	I1027 22:40:00.569696  745063 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:40:00.572360  745063 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:40:00.577422  745063 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:40:00.579938  745063 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:40:00.582313  745063 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:40:00.585183  745063 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:40:00.937351  745063 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:40:01.352088  745063 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:40:01.937813  745063 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:40:01.939794  745063 kubeadm.go:319] 
	I1027 22:40:01.939890  745063 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:40:01.939897  745063 kubeadm.go:319] 
	I1027 22:40:01.940003  745063 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:40:01.940014  745063 kubeadm.go:319] 
	I1027 22:40:01.940045  745063 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:40:01.940122  745063 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:40:01.940252  745063 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:40:01.940280  745063 kubeadm.go:319] 
	I1027 22:40:01.940357  745063 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:40:01.940389  745063 kubeadm.go:319] 
	I1027 22:40:01.940468  745063 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:40:01.940480  745063 kubeadm.go:319] 
	I1027 22:40:01.940564  745063 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:40:01.940677  745063 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:40:01.940768  745063 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:40:01.940778  745063 kubeadm.go:319] 
	I1027 22:40:01.940912  745063 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:40:01.941032  745063 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:40:01.941043  745063 kubeadm.go:319] 
	I1027 22:40:01.941152  745063 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kwkkt3.nstsis2boahtw20w \
	I1027 22:40:01.941262  745063 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 22:40:01.941288  745063 kubeadm.go:319] 	--control-plane 
	I1027 22:40:01.941298  745063 kubeadm.go:319] 
	I1027 22:40:01.941392  745063 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:40:01.941404  745063 kubeadm.go:319] 
	I1027 22:40:01.941534  745063 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kwkkt3.nstsis2boahtw20w \
	I1027 22:40:01.941688  745063 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
	I1027 22:40:01.945124  745063 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 22:40:01.945330  745063 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
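For reference, the --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA certificate with the standard openssl pipeline from the kubeadm docs (the cert path is an assumption: minikube keeps its certs under /var/lib/minikube/certs rather than /etc/kubernetes/pki):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'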
	I1027 22:40:01.945368  745063 cni.go:84] Creating CNI manager for ""
	I1027 22:40:01.945382  745063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:01.947426  745063 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:39:58.930464  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:39:59.430301  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:39:59.931180  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:00.431264  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:00.930325  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:01.431879  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:01.931064  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:02.430339  743829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:02.509979  743829 kubeadm.go:1114] duration metric: took 4.698549875s to wait for elevateKubeSystemPrivileges
	I1027 22:40:02.510027  743829 kubeadm.go:403] duration metric: took 16.447153041s to StartCluster
	I1027 22:40:02.510052  743829 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:02.510132  743829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:02.512350  743829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:02.512627  743829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 22:40:02.512642  743829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:40:02.512725  743829 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:40:02.512829  743829 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-290425"
	I1027 22:40:02.512863  743829 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-290425"
	I1027 22:40:02.512894  743829 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:02.512898  743829 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:02.512892  743829 addons.go:69] Setting default-storageclass=true in profile "newest-cni-290425"
	I1027 22:40:02.513003  743829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-290425"
	I1027 22:40:02.513451  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:02.513538  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:02.514107  743829 out.go:179] * Verifying Kubernetes components...
	I1027 22:40:02.514913  743829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:02.536441  743829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:40:01.951417  745063 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:40:01.961535  745063 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:40:01.961562  745063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:40:01.989872  745063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:40:02.223316  745063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:40:02.223826  745063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:02.223592  745063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-293335 minikube.k8s.io/updated_at=2025_10_27T22_40_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=auto-293335 minikube.k8s.io/primary=true
	I1027 22:40:02.305207  745063 ops.go:34] apiserver oom_adj: -16
	I1027 22:40:02.305233  745063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:02.806164  745063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:40:02.536489  743829 addons.go:238] Setting addon default-storageclass=true in "newest-cni-290425"
	I1027 22:40:02.536532  743829 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:02.537064  743829 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:02.537398  743829 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:02.537419  743829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:40:02.537474  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:02.566569  743829 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:02.566600  743829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:40:02.566663  743829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:02.569105  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:02.585558  743829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:02.607705  743829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 22:40:02.656397  743829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:02.689633  743829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:02.702340  743829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:02.782014  743829 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
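The sed pipeline above rewrites the coredns ConfigMap in place; reconstructing from the sed expressions, the resulting Corefile gains a hosts block (and enables the log plugin) so that host.minikube.internal resolves to the gateway IP:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}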
	I1027 22:40:02.783751  743829 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:40:02.783826  743829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:40:03.015838  743829 api_server.go:72] duration metric: took 503.151976ms to wait for apiserver process to appear ...
	I1027 22:40:03.015873  743829 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:40:03.015905  743829 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:03.021479  743829 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 22:40:03.022826  743829 api_server.go:141] control plane version: v1.34.1
	I1027 22:40:03.022869  743829 api_server.go:131] duration metric: took 6.987395ms to wait for apiserver health ...
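The healthz probe logged here is a plain HTTPS GET, so the same check can be reproduced by hand (-k skips verification because the apiserver cert is signed by the cluster CA, not a system-trusted one):

	curl -sk https://192.168.76.2:8443/healthz
	# prints "ok" once the apiserver reports healthy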
	I1027 22:40:03.022889  743829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:40:03.024372  743829 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 22:40:03.025366  743829 addons.go:514] duration metric: took 512.639056ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 22:40:03.025724  743829 system_pods.go:59] 8 kube-system pods found
	I1027 22:40:03.025764  743829 system_pods.go:61] "coredns-66bc5c9577-hmtz5" [d0253fb1-e66b-448e-8b6d-e9882120ffd2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:03.025776  743829 system_pods.go:61] "etcd-newest-cni-290425" [fa08a886-4040-46e0-9e58-975345432c48] Running
	I1027 22:40:03.025791  743829 system_pods.go:61] "kindnet-pk58m" [12e1d8a7-de11-4047-85f7-4832c3a7e80c] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 22:40:03.025800  743829 system_pods.go:61] "kube-apiserver-newest-cni-290425" [36218ab8-7cc4-4487-9dcd-5186adc9d4c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:40:03.025814  743829 system_pods.go:61] "kube-controller-manager-newest-cni-290425" [494bc2f7-8ec5-40bb-bd19-0c4a96b93532] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:40:03.025826  743829 system_pods.go:61] "kube-proxy-d866g" [ba6a46e3-367b-40d2-a919-35b062379af3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 22:40:03.025837  743829 system_pods.go:61] "kube-scheduler-newest-cni-290425" [69cd3450-9c48-455d-9bc0-b8f45eeb37c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:40:03.025848  743829 system_pods.go:61] "storage-provisioner" [d8b271bc-46b6-4d99-a6a2-27907f5afc55] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:03.025870  743829 system_pods.go:74] duration metric: took 2.973547ms to wait for pod list to return data ...
	I1027 22:40:03.025882  743829 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:40:03.028173  743829 default_sa.go:45] found service account: "default"
	I1027 22:40:03.028192  743829 default_sa.go:55] duration metric: took 2.300724ms for default service account to be created ...
	I1027 22:40:03.028208  743829 kubeadm.go:587] duration metric: took 515.528472ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:03.028230  743829 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:40:03.030441  743829 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:40:03.030474  743829 node_conditions.go:123] node cpu capacity is 8
	I1027 22:40:03.030491  743829 node_conditions.go:105] duration metric: took 2.254404ms to run NodePressure ...
	I1027 22:40:03.030505  743829 start.go:242] waiting for startup goroutines ...
	I1027 22:40:03.287367  743829 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-290425" context rescaled to 1 replicas
	I1027 22:40:03.287410  743829 start.go:247] waiting for cluster config update ...
	I1027 22:40:03.287424  743829 start.go:256] writing updated cluster config ...
	I1027 22:40:03.287741  743829 ssh_runner.go:195] Run: rm -f paused
	I1027 22:40:03.344656  743829 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:40:03.346778  743829 out.go:179] * Done! kubectl is now configured to use "newest-cni-290425" cluster and "default" namespace by default
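As the final line says, kubectl is now pointed at the new cluster. A minimal smoke test against the freshly written kubeconfig (minikube names the context after the profile):

	kubectl --context newest-cni-290425 get nodes
	kubectl --context newest-cni-290425 -n kube-system get pods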
	
	
	==> CRI-O <==
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.782915813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.783155005Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0d9a88f6-7ab9-4b9f-aafb-695b1cce41f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.789301393Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=20857f9f-abb3-48d6-9a07-daeac20e211f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.789852562Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.791059406Z" level=info msg="Ran pod sandbox d741fdd80c20adfbaaaacc1796ce3d0414d51e7ca0c089442917731f34eb29e3 with infra container: kube-system/kindnet-pk58m/POD" id=0d9a88f6-7ab9-4b9f-aafb-695b1cce41f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.791679672Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.793801218Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=02cf5142-a798-4cbf-bb41-071f7060e5b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.794409364Z" level=info msg="Ran pod sandbox 5a72bb22e625f7de5d63d3d368f3f3ab6561cdc237d605a0918b43c720462514 with infra container: kube-system/kube-proxy-d866g/POD" id=20857f9f-abb3-48d6-9a07-daeac20e211f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.797157395Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=76e0ef7b-81d3-499d-acf8-5284f97ba539 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.797191255Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8e62c65f-bbfa-4f93-a64f-b4a103aafd4c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.798924402Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=67754e86-bc96-4735-aeaf-92831da03df2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.803108067Z" level=info msg="Creating container: kube-system/kindnet-pk58m/kindnet-cni" id=e3332b79-41f8-44ea-94c0-7799c39cd8ff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.803225613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.803910872Z" level=info msg="Creating container: kube-system/kube-proxy-d866g/kube-proxy" id=0e784308-1664-4676-9bbf-23357b758169 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.804064009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.808735979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.80937592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.811673288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.812796484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.847081229Z" level=info msg="Created container db0ae732e711d5e33e932d29bcb241cbc15533a1c317cfc595821caa567580ee: kube-system/kindnet-pk58m/kindnet-cni" id=e3332b79-41f8-44ea-94c0-7799c39cd8ff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.848506466Z" level=info msg="Starting container: db0ae732e711d5e33e932d29bcb241cbc15533a1c317cfc595821caa567580ee" id=339e2339-598c-4fd7-8512-817c804aeaa4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.850802888Z" level=info msg="Created container cfde26d84a8a7f47fe6be67a477ff2e466db9a1398a4cb0d6cd7b5ebfdbfd5c5: kube-system/kube-proxy-d866g/kube-proxy" id=0e784308-1664-4676-9bbf-23357b758169 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.851258221Z" level=info msg="Started container" PID=1608 containerID=db0ae732e711d5e33e932d29bcb241cbc15533a1c317cfc595821caa567580ee description=kube-system/kindnet-pk58m/kindnet-cni id=339e2339-598c-4fd7-8512-817c804aeaa4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d741fdd80c20adfbaaaacc1796ce3d0414d51e7ca0c089442917731f34eb29e3
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.851405185Z" level=info msg="Starting container: cfde26d84a8a7f47fe6be67a477ff2e466db9a1398a4cb0d6cd7b5ebfdbfd5c5" id=1dec54f0-2994-4c36-8efa-ce6f5071aa65 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:02 newest-cni-290425 crio[769]: time="2025-10-27T22:40:02.855197884Z" level=info msg="Started container" PID=1609 containerID=cfde26d84a8a7f47fe6be67a477ff2e466db9a1398a4cb0d6cd7b5ebfdbfd5c5 description=kube-system/kube-proxy-d866g/kube-proxy id=1dec54f0-2994-4c36-8efa-ce6f5071aa65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a72bb22e625f7de5d63d3d368f3f3ab6561cdc237d605a0918b43c720462514
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cfde26d84a8a7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   5a72bb22e625f       kube-proxy-d866g                            kube-system
	db0ae732e711d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   d741fdd80c20a       kindnet-pk58m                               kube-system
	3b0a91dda06a1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   22fb474018df9       etcd-newest-cni-290425                      kube-system
	1a1925afa0582       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   fda9e68e1ef7c       kube-controller-manager-newest-cni-290425   kube-system
	d6e6e9cc207ae       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   53cd428c731c9       kube-apiserver-newest-cni-290425            kube-system
	8dba037620199       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   33f7df7f4f8e2       kube-scheduler-newest-cni-290425            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-290425
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-290425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=newest-cni-290425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_39_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:39:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-290425
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:39:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:39:56 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:39:56 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:39:56 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 22:39:56 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-290425
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e391c1d9-7d95-420d-8069-436e90adb7af
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-290425                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-pk58m                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-290425             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-290425    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-d866g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-290425             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-290425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-290425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-290425 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-290425 event: Registered Node newest-cni-290425 in Controller
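Note the Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint above: both stem from the missing CNI config ("no CNI configuration file in /etc/cni/net.d/"), which also explains why coredns and storage-provisioner were Pending as Unschedulable earlier in the log. A quick way to watch the condition flip once kindnet writes its config (a sketch, assuming this node name):

	kubectl get node newest-cni-290425 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'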
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [3b0a91dda06a1134a23dccfa7dc1568a1a2a47718cf98be1f7819c17f1c3b817] <==
	{"level":"warn","ts":"2025-10-27T22:39:53.845823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.853835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.860087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.867656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.876610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.884641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.893375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.906195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.913544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.922062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.930327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.940092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.947708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.957890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.964834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.971810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.979042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.985031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.992302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:53.999194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:54.005194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:54.021246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:54.027138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:54.033288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:39:54.085056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58162","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:40:04 up  2:22,  0 user,  load average: 5.53, 3.30, 2.94
	Linux newest-cni-290425 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [db0ae732e711d5e33e932d29bcb241cbc15533a1c317cfc595821caa567580ee] <==
	I1027 22:40:03.118625       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:40:03.118907       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 22:40:03.119065       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:40:03.119082       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:40:03.119114       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:40:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:40:03.321343       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:40:03.321404       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:40:03.321419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:40:03.436652       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:40:03.822112       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:40:03.822208       1 metrics.go:72] Registering metrics
	I1027 22:40:03.822279       1 controller.go:711] "Syncing nftables rules"
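kindnet is doing double duty here: CNI setup (mtu 1500, noMask subnet 10.244.0.0/16) plus the kube-network-policies controller, which programs nftables rules; the nri plugin message is harmless when no NRI socket exists on the node. The resulting ruleset can be inspected from the node (the log does not name the table, so this lists everything):

	sudo nft list ruleset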
	
	
	==> kube-apiserver [d6e6e9cc207aef6d4871604a8d372871a3ea130d127d5765d71782125ffcdb6a] <==
	I1027 22:39:54.569535       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 22:39:54.569574       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 22:39:54.569603       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1027 22:39:54.573886       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:39:54.573990       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 22:39:54.578831       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:39:54.579630       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:39:54.748869       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:39:55.468571       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 22:39:55.473070       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 22:39:55.473090       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:39:55.963390       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:39:56.001467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:39:56.073109       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 22:39:56.081027       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 22:39:56.082119       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:39:56.086861       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:39:56.507996       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:39:56.923771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:39:56.933417       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 22:39:56.940964       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:40:02.410310       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:40:02.459850       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 22:40:02.510856       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:40:02.515743       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1a1925afa0582afc9397a0a08d479d08552775e16c38416c714eae8c9127f9e5] <==
	I1027 22:40:01.506518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 22:40:01.507666       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:40:01.507686       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:40:01.507713       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:40:01.507747       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:40:01.507772       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:40:01.507785       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:40:01.507883       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:40:01.507903       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 22:40:01.510347       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:40:01.510865       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:40:01.510871       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:40:01.510935       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:40:01.511010       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:40:01.511017       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:40:01.511024       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:40:01.513126       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:40:01.518394       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-290425" podCIDRs=["10.42.0.0/24"]
	I1027 22:40:01.520326       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:40:01.520390       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:40:01.522581       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:40:01.522597       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:40:01.522604       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 22:40:01.527847       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:40:01.538416       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cfde26d84a8a7f47fe6be67a477ff2e466db9a1398a4cb0d6cd7b5ebfdbfd5c5] <==
	I1027 22:40:02.900204       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:40:02.970066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:40:03.071131       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:40:03.071197       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 22:40:03.071280       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:40:03.091795       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:40:03.091851       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:40:03.097381       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:40:03.097790       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:40:03.097826       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:03.100028       1 config.go:309] "Starting node config controller"
	I1027 22:40:03.100489       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:40:03.100507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:40:03.100676       1 config.go:200] "Starting service config controller"
	I1027 22:40:03.100692       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:40:03.100882       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:40:03.100898       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:40:03.101147       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:40:03.101182       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:40:03.200957       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:40:03.201891       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:40:03.201988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
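With the iptables proxier selected above, service virtual IPs are realized as nat-table chains; a minimal inspection from the node uses kube-proxy's standard entry chain:

	sudo iptables -t nat -L KUBE-SERVICES | head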
	
	
	==> kube-scheduler [8dba03762019924146b3d75b70ba3b71bc7f395aaa12a06a09c4959760673809] <==
	E1027 22:39:54.515524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:39:54.515576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:39:54.515573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:39:54.515686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:39:54.515727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:39:54.515720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:39:54.515834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:39:54.515891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:39:54.515924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:39:54.516009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:39:54.516056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:39:54.516068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:39:54.516207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:39:54.516215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:39:55.372505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:39:55.375828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:39:55.380837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:39:55.389052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:39:55.420745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 22:39:55.536744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:39:55.663371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:39:55.667602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:39:55.698989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 22:39:55.714277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1027 22:39:58.212239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:39:56 newest-cni-290425 kubelet[1311]: I1027 22:39:56.965928    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1db01a87d8ffc75dbdd7be6d6868c01-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-290425\" (UID: \"f1db01a87d8ffc75dbdd7be6d6868c01\") " pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:39:56 newest-cni-290425 kubelet[1311]: I1027 22:39:56.965965    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1db01a87d8ffc75dbdd7be6d6868c01-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-290425\" (UID: \"f1db01a87d8ffc75dbdd7be6d6868c01\") " pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:39:56 newest-cni-290425 kubelet[1311]: I1027 22:39:56.965991    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1db01a87d8ffc75dbdd7be6d6868c01-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-290425\" (UID: \"f1db01a87d8ffc75dbdd7be6d6868c01\") " pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: I1027 22:39:57.757932    1311 apiserver.go:52] "Watching apiserver"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: I1027 22:39:57.765564    1311 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: I1027 22:39:57.801250    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-290425"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: I1027 22:39:57.802167    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-290425"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: E1027 22:39:57.808757    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-290425\" already exists" pod="kube-system/kube-scheduler-newest-cni-290425"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: E1027 22:39:57.809331    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-290425\" already exists" pod="kube-system/etcd-newest-cni-290425"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: I1027 22:39:57.822634    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-290425" podStartSLOduration=1.822602228 podStartE2EDuration="1.822602228s" podCreationTimestamp="2025-10-27 22:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:57.822456424 +0000 UTC m=+1.123701468" watchObservedRunningTime="2025-10-27 22:39:57.822602228 +0000 UTC m=+1.123847273"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: I1027 22:39:57.846813    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-290425" podStartSLOduration=1.8467914859999999 podStartE2EDuration="1.846791486s" podCreationTimestamp="2025-10-27 22:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:57.834030267 +0000 UTC m=+1.135275311" watchObservedRunningTime="2025-10-27 22:39:57.846791486 +0000 UTC m=+1.148036529"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: I1027 22:39:57.846956    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-290425" podStartSLOduration=1.84693555 podStartE2EDuration="1.84693555s" podCreationTimestamp="2025-10-27 22:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:57.84673332 +0000 UTC m=+1.147978364" watchObservedRunningTime="2025-10-27 22:39:57.84693555 +0000 UTC m=+1.148180594"
	Oct 27 22:39:57 newest-cni-290425 kubelet[1311]: I1027 22:39:57.870217    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-290425" podStartSLOduration=1.870182689 podStartE2EDuration="1.870182689s" podCreationTimestamp="2025-10-27 22:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:39:57.857616373 +0000 UTC m=+1.158861417" watchObservedRunningTime="2025-10-27 22:39:57.870182689 +0000 UTC m=+1.171427734"
	Oct 27 22:40:01 newest-cni-290425 kubelet[1311]: I1027 22:40:01.618401    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 22:40:01 newest-cni-290425 kubelet[1311]: I1027 22:40:01.619161    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 22:40:02 newest-cni-290425 kubelet[1311]: I1027 22:40:02.507499    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlwmx\" (UniqueName: \"kubernetes.io/projected/12e1d8a7-de11-4047-85f7-4832c3a7e80c-kube-api-access-qlwmx\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:02 newest-cni-290425 kubelet[1311]: I1027 22:40:02.507550    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba6a46e3-367b-40d2-a919-35b062379af3-kube-proxy\") pod \"kube-proxy-d866g\" (UID: \"ba6a46e3-367b-40d2-a919-35b062379af3\") " pod="kube-system/kube-proxy-d866g"
	Oct 27 22:40:02 newest-cni-290425 kubelet[1311]: I1027 22:40:02.507577    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6x8\" (UniqueName: \"kubernetes.io/projected/ba6a46e3-367b-40d2-a919-35b062379af3-kube-api-access-mb6x8\") pod \"kube-proxy-d866g\" (UID: \"ba6a46e3-367b-40d2-a919-35b062379af3\") " pod="kube-system/kube-proxy-d866g"
	Oct 27 22:40:02 newest-cni-290425 kubelet[1311]: I1027 22:40:02.507673    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-xtables-lock\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:02 newest-cni-290425 kubelet[1311]: I1027 22:40:02.507737    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-lib-modules\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:02 newest-cni-290425 kubelet[1311]: I1027 22:40:02.507759    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba6a46e3-367b-40d2-a919-35b062379af3-xtables-lock\") pod \"kube-proxy-d866g\" (UID: \"ba6a46e3-367b-40d2-a919-35b062379af3\") " pod="kube-system/kube-proxy-d866g"
	Oct 27 22:40:02 newest-cni-290425 kubelet[1311]: I1027 22:40:02.507781    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba6a46e3-367b-40d2-a919-35b062379af3-lib-modules\") pod \"kube-proxy-d866g\" (UID: \"ba6a46e3-367b-40d2-a919-35b062379af3\") " pod="kube-system/kube-proxy-d866g"
	Oct 27 22:40:02 newest-cni-290425 kubelet[1311]: I1027 22:40:02.507808    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-cni-cfg\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:03 newest-cni-290425 kubelet[1311]: I1027 22:40:03.859490    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pk58m" podStartSLOduration=1.859466382 podStartE2EDuration="1.859466382s" podCreationTimestamp="2025-10-27 22:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:40:03.859276777 +0000 UTC m=+7.160521820" watchObservedRunningTime="2025-10-27 22:40:03.859466382 +0000 UTC m=+7.160711425"
	Oct 27 22:40:03 newest-cni-290425 kubelet[1311]: I1027 22:40:03.859644    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d866g" podStartSLOduration=1.859635233 podStartE2EDuration="1.859635233s" podCreationTimestamp="2025-10-27 22:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 22:40:03.839057096 +0000 UTC m=+7.140302142" watchObservedRunningTime="2025-10-27 22:40:03.859635233 +0000 UTC m=+7.160880278"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-290425 -n newest-cni-290425
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-290425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-hmtz5 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner: exit status 1 (69.892402ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-hmtz5" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.20s)
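Note on the scheduler messages captured above: bursts of "Failed to watch ... is forbidden" from system:kube-scheduler are typical in the window right after an apiserver restart, before its RBAC bindings are re-synced, and they normally stop once the "Caches are synced" line appears (here at 22:39:58). If they were to persist, a minimal check of the permission itself, assuming the kubectl context name used throughout this run, would be:

	kubectl --context newest-cni-290425 auth can-i list deviceclasses.resource.k8s.io --as=system:kube-scheduler
	kubectl --context newest-cni-290425 get clusterrolebinding system:kube-scheduler -o wide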

TestStartStop/group/embed-certs/serial/Pause (6.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-829976 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-829976 --alsologtostderr -v=1: exit status 80 (2.42742397s)

-- stdout --
	* Pausing node embed-certs-829976 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 22:40:33.311088  758708 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:40:33.312044  758708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:33.312059  758708 out.go:374] Setting ErrFile to fd 2...
	I1027 22:40:33.312065  758708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:33.312769  758708 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:40:33.313330  758708 out.go:368] Setting JSON to false
	I1027 22:40:33.313434  758708 mustload.go:66] Loading cluster: embed-certs-829976
	I1027 22:40:33.314622  758708 config.go:182] Loaded profile config "embed-certs-829976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:33.315594  758708 cli_runner.go:164] Run: docker container inspect embed-certs-829976 --format={{.State.Status}}
	I1027 22:40:33.338791  758708 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:40:33.339209  758708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:33.420596  758708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-27 22:40:33.406689389 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:33.421456  758708 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-829976 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 22:40:33.422780  758708 out.go:179] * Pausing node embed-certs-829976 ... 
	I1027 22:40:33.424139  758708 host.go:66] Checking if "embed-certs-829976" exists ...
	I1027 22:40:33.424495  758708 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:33.424573  758708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-829976
	I1027 22:40:33.446641  758708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/embed-certs-829976/id_rsa Username:docker}
	I1027 22:40:33.547954  758708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:40:33.573486  758708 pause.go:52] kubelet running: true
	I1027 22:40:33.573559  758708 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:40:33.785313  758708 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:40:33.785396  758708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:40:33.856627  758708 cri.go:89] found id: "a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05"
	I1027 22:40:33.856654  758708 cri.go:89] found id: "9447a459809287a6975180679ff62303ef8cb2896cb8860c8aeb82b1b5e8bc3c"
	I1027 22:40:33.856661  758708 cri.go:89] found id: "cbd067eb1679660eec922b92c264ea4b4019dcf99c1f78856e648b52f27cb061"
	I1027 22:40:33.856667  758708 cri.go:89] found id: "51e1b51f1e8d7456d9abb387421db6e13e287cb56c376344f144076a8be30b1b"
	I1027 22:40:33.856672  758708 cri.go:89] found id: "36caab8434bebf2193c8f305a5d81c1aa34986386d7338a3bbd3c750f1b6e6db"
	I1027 22:40:33.856676  758708 cri.go:89] found id: "2f44a2722d5ccd7616df1090c6bb0dbee4aa51ec06009ab3a0c5b8d4976586ea"
	I1027 22:40:33.856679  758708 cri.go:89] found id: "45a7ab4d457895149bd74409ca1cf2067d30d698e93850bc8e3ded4ce106bbab"
	I1027 22:40:33.856681  758708 cri.go:89] found id: "9ebb5d429db0f5d2cfac0c88b414dd785a0b2d57b9fcfeb926197b670710530b"
	I1027 22:40:33.856684  758708 cri.go:89] found id: "e617c18783204a4f1e575bdec7825512002bad31cb3b04208481ca9f4c563564"
	I1027 22:40:33.856689  758708 cri.go:89] found id: "8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c"
	I1027 22:40:33.856695  758708 cri.go:89] found id: "ba17aac76ebe9be83a75950a4ef6b7b6315fa93827de262d9593d4e97bbdf936"
	I1027 22:40:33.856698  758708 cri.go:89] found id: ""
	I1027 22:40:33.856740  758708 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:40:33.869054  758708 retry.go:31] will retry after 249.514768ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:33Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:40:34.119541  758708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:40:34.133049  758708 pause.go:52] kubelet running: false
	I1027 22:40:34.133113  758708 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:40:34.280708  758708 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:40:34.280799  758708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:40:34.363888  758708 cri.go:89] found id: "a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05"
	I1027 22:40:34.363913  758708 cri.go:89] found id: "9447a459809287a6975180679ff62303ef8cb2896cb8860c8aeb82b1b5e8bc3c"
	I1027 22:40:34.363918  758708 cri.go:89] found id: "cbd067eb1679660eec922b92c264ea4b4019dcf99c1f78856e648b52f27cb061"
	I1027 22:40:34.363923  758708 cri.go:89] found id: "51e1b51f1e8d7456d9abb387421db6e13e287cb56c376344f144076a8be30b1b"
	I1027 22:40:34.363927  758708 cri.go:89] found id: "36caab8434bebf2193c8f305a5d81c1aa34986386d7338a3bbd3c750f1b6e6db"
	I1027 22:40:34.363932  758708 cri.go:89] found id: "2f44a2722d5ccd7616df1090c6bb0dbee4aa51ec06009ab3a0c5b8d4976586ea"
	I1027 22:40:34.363936  758708 cri.go:89] found id: "45a7ab4d457895149bd74409ca1cf2067d30d698e93850bc8e3ded4ce106bbab"
	I1027 22:40:34.363959  758708 cri.go:89] found id: "9ebb5d429db0f5d2cfac0c88b414dd785a0b2d57b9fcfeb926197b670710530b"
	I1027 22:40:34.363964  758708 cri.go:89] found id: "e617c18783204a4f1e575bdec7825512002bad31cb3b04208481ca9f4c563564"
	I1027 22:40:34.363976  758708 cri.go:89] found id: "8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c"
	I1027 22:40:34.363980  758708 cri.go:89] found id: "ba17aac76ebe9be83a75950a4ef6b7b6315fa93827de262d9593d4e97bbdf936"
	I1027 22:40:34.363984  758708 cri.go:89] found id: ""
	I1027 22:40:34.364029  758708 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:40:34.380389  758708 retry.go:31] will retry after 354.304431ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:34Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:40:34.734930  758708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:40:34.748030  758708 pause.go:52] kubelet running: false
	I1027 22:40:34.748087  758708 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:40:34.896468  758708 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:40:34.896567  758708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:40:34.970615  758708 cri.go:89] found id: "a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05"
	I1027 22:40:34.970634  758708 cri.go:89] found id: "9447a459809287a6975180679ff62303ef8cb2896cb8860c8aeb82b1b5e8bc3c"
	I1027 22:40:34.970637  758708 cri.go:89] found id: "cbd067eb1679660eec922b92c264ea4b4019dcf99c1f78856e648b52f27cb061"
	I1027 22:40:34.970640  758708 cri.go:89] found id: "51e1b51f1e8d7456d9abb387421db6e13e287cb56c376344f144076a8be30b1b"
	I1027 22:40:34.970643  758708 cri.go:89] found id: "36caab8434bebf2193c8f305a5d81c1aa34986386d7338a3bbd3c750f1b6e6db"
	I1027 22:40:34.970646  758708 cri.go:89] found id: "2f44a2722d5ccd7616df1090c6bb0dbee4aa51ec06009ab3a0c5b8d4976586ea"
	I1027 22:40:34.970662  758708 cri.go:89] found id: "45a7ab4d457895149bd74409ca1cf2067d30d698e93850bc8e3ded4ce106bbab"
	I1027 22:40:34.970665  758708 cri.go:89] found id: "9ebb5d429db0f5d2cfac0c88b414dd785a0b2d57b9fcfeb926197b670710530b"
	I1027 22:40:34.970667  758708 cri.go:89] found id: "e617c18783204a4f1e575bdec7825512002bad31cb3b04208481ca9f4c563564"
	I1027 22:40:34.970678  758708 cri.go:89] found id: "8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c"
	I1027 22:40:34.970681  758708 cri.go:89] found id: "ba17aac76ebe9be83a75950a4ef6b7b6315fa93827de262d9593d4e97bbdf936"
	I1027 22:40:34.970684  758708 cri.go:89] found id: ""
	I1027 22:40:34.970718  758708 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:40:34.983567  758708 retry.go:31] will retry after 347.142997ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:34Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:40:35.331081  758708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:40:35.350894  758708 pause.go:52] kubelet running: false
	I1027 22:40:35.350986  758708 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:40:35.541496  758708 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:40:35.541600  758708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:40:35.625724  758708 cri.go:89] found id: "a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05"
	I1027 22:40:35.625746  758708 cri.go:89] found id: "9447a459809287a6975180679ff62303ef8cb2896cb8860c8aeb82b1b5e8bc3c"
	I1027 22:40:35.625749  758708 cri.go:89] found id: "cbd067eb1679660eec922b92c264ea4b4019dcf99c1f78856e648b52f27cb061"
	I1027 22:40:35.625753  758708 cri.go:89] found id: "51e1b51f1e8d7456d9abb387421db6e13e287cb56c376344f144076a8be30b1b"
	I1027 22:40:35.625755  758708 cri.go:89] found id: "36caab8434bebf2193c8f305a5d81c1aa34986386d7338a3bbd3c750f1b6e6db"
	I1027 22:40:35.625758  758708 cri.go:89] found id: "2f44a2722d5ccd7616df1090c6bb0dbee4aa51ec06009ab3a0c5b8d4976586ea"
	I1027 22:40:35.625761  758708 cri.go:89] found id: "45a7ab4d457895149bd74409ca1cf2067d30d698e93850bc8e3ded4ce106bbab"
	I1027 22:40:35.625764  758708 cri.go:89] found id: "9ebb5d429db0f5d2cfac0c88b414dd785a0b2d57b9fcfeb926197b670710530b"
	I1027 22:40:35.625766  758708 cri.go:89] found id: "e617c18783204a4f1e575bdec7825512002bad31cb3b04208481ca9f4c563564"
	I1027 22:40:35.625772  758708 cri.go:89] found id: "8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c"
	I1027 22:40:35.625774  758708 cri.go:89] found id: "ba17aac76ebe9be83a75950a4ef6b7b6315fa93827de262d9593d4e97bbdf936"
	I1027 22:40:35.625777  758708 cri.go:89] found id: ""
	I1027 22:40:35.625831  758708 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:40:35.641395  758708 out.go:203] 
	W1027 22:40:35.642617  758708 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:40:35.642645  758708 out.go:285] * 
	* 
	W1027 22:40:35.649525  758708 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:40:35.650914  758708 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-829976 --alsologtostderr -v=1 failed: exit status 80
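The failure mode in the stderr above is identical on every retry: crictl still lists the kube-system containers, but "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory". runc's list command reads container state from its --root directory, which defaults to /run/runc when run as root, while CRI-O can be configured with a different runtime_root, in which case the default root never exists on the node. A hedged diagnostic sketch (the /run/crio/runc path below is purely illustrative, not taken from this run):

	# Ask CRI-O for its effective config and find the state root it hands to runc.
	minikube -p embed-certs-829976 ssh -- sudo crio config | grep -i runtime_root
	# Point runc at that root instead of its default /run/runc.
	minikube -p embed-certs-829976 ssh -- sudo runc --root /run/crio/runc list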
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-829976
helpers_test.go:243: (dbg) docker inspect embed-certs-829976:

-- stdout --
	[
	    {
	        "Id": "faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4",
	        "Created": "2025-10-27T22:38:24.135878096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:39:31.028103078Z",
	            "FinishedAt": "2025-10-27T22:39:29.983394918Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/hosts",
	        "LogPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4-json.log",
	        "Name": "/embed-certs-829976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-829976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-829976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4",
	                "LowerDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-829976",
	                "Source": "/var/lib/docker/volumes/embed-certs-829976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-829976",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-829976",
	                "name.minikube.sigs.k8s.io": "embed-certs-829976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44e37758f1abe5f5a5c831f5f14a8dc85d9323346b7b61ced077ed068a66e5c7",
	            "SandboxKey": "/var/run/docker/netns/44e37758f1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-829976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:26:0b:40:d3:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19326983879b440afd91ddad1f1a29b86b26ac185f059a173d6110952f20d348",
	                    "EndpointID": "a1c0233c9157dcb94206d0f164b4c48815960e9dac53b07132a982c3b0a5e539",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-829976",
	                        "faeaf04da269"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
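As a cross-check on the SSH endpoint the failed pause command dialed (127.0.0.1:33083 in its log), the same host port can be read out of this inspect output with the Go template minikube itself ran above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-829976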
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829976 -n embed-certs-829976
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829976 -n embed-certs-829976: exit status 2 (389.928465ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-829976 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-829976 logs -n 25: (1.258213629s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ image   │ no-preload-188814 image list --format=json                                                                                                                                                                                                    │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ pause   │ -p no-preload-188814 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-829976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p kubernetes-upgrade-695499                                                                                                                                                                                                                  │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p auto-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-927034 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-290425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ stop    │ -p newest-cni-290425 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927034 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 pgrep -a kubelet                                                                                                                                                                                                               │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable dashboard -p newest-cni-290425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ image   │ embed-certs-829976 image list --format=json                                                                                                                                                                                                   │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ pause   │ -p embed-certs-829976 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ image   │ newest-cni-290425 image list --format=json                                                                                                                                                                                                    │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ pause   │ -p newest-cni-290425 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/hosts                                                                                                                                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:40:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:40:24.438209  756848 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:40:24.438329  756848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:24.438340  756848 out.go:374] Setting ErrFile to fd 2...
	I1027 22:40:24.438345  756848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:24.438673  756848 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:40:24.439297  756848 out.go:368] Setting JSON to false
	I1027 22:40:24.440841  756848 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8563,"bootTime":1761596261,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:40:24.440961  756848 start.go:143] virtualization: kvm guest
	I1027 22:40:24.442921  756848 out.go:179] * [newest-cni-290425] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:40:24.445592  756848 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:40:24.445629  756848 notify.go:221] Checking for updates...
	I1027 22:40:24.448124  756848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:40:24.449565  756848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:24.451090  756848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:40:24.452160  756848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:40:24.456462  756848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:40:24.458338  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:24.459094  756848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:40:24.488803  756848 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:40:24.488892  756848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:24.557828  756848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:40:24.546418647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:24.557998  756848 docker.go:318] overlay module found
	I1027 22:40:24.559462  756848 out.go:179] * Using the docker driver based on existing profile
	I1027 22:40:24.560558  756848 start.go:307] selected driver: docker
	I1027 22:40:24.560578  756848 start.go:928] validating driver "docker" against &{Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:24.560718  756848 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:40:24.561602  756848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:24.632177  756848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:40:24.620016626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:24.632569  756848 start_flags.go:1010] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:24.632600  756848 cni.go:84] Creating CNI manager for ""
	I1027 22:40:24.632673  756848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:24.632732  756848 start.go:351] cluster config:
	{Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:24.634382  756848 out.go:179] * Starting "newest-cni-290425" primary control-plane node in "newest-cni-290425" cluster
	I1027 22:40:24.635369  756848 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:40:24.636382  756848 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:40:24.637272  756848 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:24.637317  756848 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:40:24.637329  756848 cache.go:59] Caching tarball of preloaded images
	I1027 22:40:24.637336  756848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:40:24.637435  756848 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:40:24.637450  756848 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:40:24.637576  756848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:40:24.659489  756848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:40:24.659511  756848 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:40:24.659527  756848 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:40:24.659550  756848 start.go:360] acquireMachinesLock for newest-cni-290425: {Name:mk4e0aa51aaa1a604f2ac1e14d4e9ad4994a6e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:40:24.659621  756848 start.go:364] duration metric: took 41.13µs to acquireMachinesLock for "newest-cni-290425"
	I1027 22:40:24.659640  756848 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:40:24.659645  756848 fix.go:55] fixHost starting: 
	I1027 22:40:24.659871  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:24.679073  756848 fix.go:113] recreateIfNeeded on newest-cni-290425: state=Stopped err=<nil>
	W1027 22:40:24.679130  756848 fix.go:139] unexpected machine state, will restart: <nil>
	W1027 22:40:24.188623  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:26.687852  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:24.681022  756848 out.go:252] * Restarting existing docker container for "newest-cni-290425" ...
	I1027 22:40:24.681102  756848 cli_runner.go:164] Run: docker start newest-cni-290425
	I1027 22:40:24.992255  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:25.014514  756848 kic.go:430] container "newest-cni-290425" state is running.
	I1027 22:40:25.015046  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:25.038668  756848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:40:25.038987  756848 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:25.039099  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:25.061826  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:25.062260  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:25.062285  756848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:25.063188  756848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45146->127.0.0.1:33103: read: connection reset by peer
	I1027 22:40:28.204458  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:40:28.204492  756848 ubuntu.go:182] provisioning hostname "newest-cni-290425"
	I1027 22:40:28.204559  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.222514  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.222737  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.222759  756848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-290425 && echo "newest-cni-290425" | sudo tee /etc/hostname
	I1027 22:40:28.375236  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:40:28.375318  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.392770  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.393063  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.393082  756848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-290425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-290425/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-290425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:40:28.533683  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:40:28.533712  756848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:40:28.533740  756848 ubuntu.go:190] setting up certificates
	I1027 22:40:28.533756  756848 provision.go:84] configureAuth start
	I1027 22:40:28.533832  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:28.551092  756848 provision.go:143] copyHostCerts
	I1027 22:40:28.551157  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:40:28.551183  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:40:28.551262  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:40:28.551424  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:40:28.551439  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:40:28.551489  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:40:28.551578  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:40:28.551589  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:40:28.551627  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:40:28.551720  756848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-290425 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-290425]
	I1027 22:40:28.786512  756848 provision.go:177] copyRemoteCerts
	I1027 22:40:28.786589  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:40:28.786645  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.804351  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:28.905399  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:40:28.923296  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:40:28.940336  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:40:28.958758  756848 provision.go:87] duration metric: took 424.98667ms to configureAuth
	I1027 22:40:28.958786  756848 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:40:28.959034  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:28.959153  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.977021  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.977337  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.977362  756848 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:40:29.254601  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:40:29.254630  756848 machine.go:97] duration metric: took 4.215620835s to provisionDockerMachine
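The provisioning step above drops a CRIO_MINIKUBE_OPTIONS line into /etc/sysconfig/crio.minikube and restarts cri-o. A minimal sketch for confirming the flag actually reached the daemon, assuming (as in the minikube base image) that the crio unit loads that file through an EnvironmentFile directive:

	# show the effective unit, including EnvironmentFile lines and drop-ins
	systemctl cat crio
	# the running daemon's command line should carry the extra flag
	ps -o args= -C crio | grep -- '--insecure-registry 10.96.0.0/12'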
	I1027 22:40:29.254645  756848 start.go:293] postStartSetup for "newest-cni-290425" (driver="docker")
	I1027 22:40:29.254658  756848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:40:29.254744  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:40:29.254799  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.272656  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.373656  756848 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:40:29.377346  756848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:40:29.377381  756848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:40:29.377394  756848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:40:29.377439  756848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:40:29.377507  756848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:40:29.377598  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:40:29.385749  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:29.403339  756848 start.go:296] duration metric: took 148.678819ms for postStartSetup
	I1027 22:40:29.403416  756848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:40:29.403473  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.421865  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.520183  756848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:40:29.524936  756848 fix.go:57] duration metric: took 4.865280599s for fixHost
	I1027 22:40:29.524989  756848 start.go:83] releasing machines lock for "newest-cni-290425", held for 4.865355811s
	I1027 22:40:29.525055  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:29.542221  756848 ssh_runner.go:195] Run: cat /version.json
	I1027 22:40:29.542269  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.542325  756848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:40:29.542380  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.560078  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.560376  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.658503  756848 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:29.714758  756848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:40:29.751819  756848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:40:29.757527  756848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:40:29.757592  756848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:40:29.766082  756848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
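The find/mv pass above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs (kindnet here) stays active; in this run nothing matched. Should sidelined originals ever need restoring, a sketch of the reverse rename:

	# strip the .mk_disabled suffix from any sidelined CNI configs
	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;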
	I1027 22:40:29.766107  756848 start.go:496] detecting cgroup driver to use...
	I1027 22:40:29.766144  756848 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:40:29.766201  756848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:40:29.782220  756848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:40:29.795704  756848 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:40:29.795756  756848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:40:29.811814  756848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:40:29.824770  756848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:40:29.911398  756848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:40:30.002621  756848 docker.go:234] disabling docker service ...
	I1027 22:40:30.002705  756848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:40:30.018425  756848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:40:30.032066  756848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:40:30.126259  756848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:40:30.224136  756848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:40:30.240695  756848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:40:30.262231  756848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:40:30.262309  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.272017  756848 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:40:30.272077  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.281097  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.290459  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.299765  756848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:40:30.308783  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.318037  756848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.326660  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
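Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf roughly in the following state. This is a sketch: the TOML table headers ([crio.image], [crio.runtime]) come from cri-o's stock drop-in and are assumed here, since the log shows only the individual key rewrites:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]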
	I1027 22:40:30.335545  756848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:40:30.343816  756848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:40:30.351923  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:30.438807  756848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:40:30.541588  756848 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:40:30.541647  756848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:40:30.545709  756848 start.go:564] Will wait 60s for crictl version
	I1027 22:40:30.545763  756848 ssh_runner.go:195] Run: which crictl
	I1027 22:40:30.549390  756848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:40:30.574840  756848 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:40:30.574912  756848 ssh_runner.go:195] Run: crio --version
	I1027 22:40:30.603907  756848 ssh_runner.go:195] Run: crio --version
	I1027 22:40:30.635251  756848 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:40:30.636309  756848 cli_runner.go:164] Run: docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:30.652517  756848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:40:30.656856  756848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
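The /etc/hosts rewrite above (repeated later for control-plane.minikube.internal) follows an idempotent pattern: filter out any stale mapping for the name, append the fresh one, and copy the result back over /etc/hosts in one step. The generic shape, with IP and HOST as placeholders:

	{ grep -v $'\tHOST$' /etc/hosts; printf 'IP\tHOST\n'; } > /tmp/h.$$ \
	  && sudo cp /tmp/h.$$ /etc/hosts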
	I1027 22:40:30.668683  756848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1027 22:40:30.669554  756848 kubeadm.go:884] updating cluster {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:40:30.669731  756848 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:30.669822  756848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:30.704544  756848 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:30.704566  756848 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:40:30.704611  756848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:30.734075  756848 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:30.734098  756848 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:40:30.734106  756848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 22:40:30.734202  756848 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-290425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
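The empty ExecStart= line in the kubelet drop-in above is deliberate systemd syntax: a non-oneshot service may declare only one ExecStart, so assigning it empty first clears the command inherited from the base unit, letting the following ExecStart= replace it rather than conflict with it. To inspect the merged result on the node:

	# print the base unit plus every drop-in currently in effect
	systemctl cat kubelet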
	I1027 22:40:30.734273  756848 ssh_runner.go:195] Run: crio config
	I1027 22:40:30.780046  756848 cni.go:84] Creating CNI manager for ""
	I1027 22:40:30.780067  756848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:30.780090  756848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 22:40:30.780113  756848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-290425 NodeName:newest-cni-290425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:40:30.780240  756848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-290425"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
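The generated config above combines four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one YAML stream. A hedged sketch for sanity-checking such a file on the node before it is used, assuming the kubeadm build in play provides the 'config validate' subcommand (present in recent releases) and using the paths written above:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new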
	
	I1027 22:40:30.780304  756848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:40:30.788709  756848 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:40:30.788776  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:40:30.796691  756848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:40:30.809324  756848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:40:30.821977  756848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1027 22:40:30.834850  756848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:40:30.838629  756848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:30.848598  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:30.930756  756848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:30.960505  756848 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425 for IP: 192.168.76.2
	I1027 22:40:30.960526  756848 certs.go:195] generating shared ca certs ...
	I1027 22:40:30.960549  756848 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:30.960716  756848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:40:30.960760  756848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:40:30.960770  756848 certs.go:257] generating profile certs ...
	I1027 22:40:30.960854  756848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key
	I1027 22:40:30.960928  756848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67
	I1027 22:40:30.961028  756848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key
	I1027 22:40:30.961171  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:40:30.961204  756848 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:40:30.961217  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:40:30.961254  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:40:30.961289  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:40:30.961318  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:40:30.961382  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:30.962311  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:40:30.982191  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:40:31.003485  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:40:31.024750  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:40:31.051339  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:40:31.070810  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:40:31.089035  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:40:31.107252  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:40:31.124793  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:40:31.142653  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:40:31.162599  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:40:31.180139  756848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:40:31.194578  756848 ssh_runner.go:195] Run: openssl version
	I1027 22:40:31.200775  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:40:31.210145  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.214047  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.214105  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.252428  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:40:31.261073  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:40:31.270127  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.274120  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.274183  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.309111  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:40:31.317698  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:40:31.326420  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.330243  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.330307  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.365724  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
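The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: c_rehash-style lookups in /etc/ssl/certs locate a CA by hashing its subject and opening <hash>.0. The same linking step, reproduced as a sketch for the minikube CA:

	# link a CA into the system trust dir under its subject hash
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"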
	I1027 22:40:31.374331  756848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:40:31.378340  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:40:31.413065  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:40:31.448812  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:40:31.492414  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:40:31.536913  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:40:31.581567  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
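Each openssl run above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit presumably steers minikube toward regenerating that cert. Standalone form:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo 'valid for at least 24h' || echo 'expires within 24h'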
	I1027 22:40:31.637412  756848 kubeadm.go:401] StartCluster: {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:31.637550  756848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:40:31.637610  756848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:40:31.673955  756848 cri.go:89] found id: "5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727"
	I1027 22:40:31.673983  756848 cri.go:89] found id: "e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295"
	I1027 22:40:31.673988  756848 cri.go:89] found id: "bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f"
	I1027 22:40:31.673993  756848 cri.go:89] found id: "54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d"
	I1027 22:40:31.673996  756848 cri.go:89] found id: ""
	I1027 22:40:31.674047  756848 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:40:31.687812  756848 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:31Z" level=error msg="open /run/runc: no such file or directory"
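The 'runc list' failure above is expected when nothing has registered state under runc's default root (/run/runc): cri-o tracks its containers through its own runtime root, so the reliable way to enumerate them is over CRI, as the crictl call a few lines earlier did. As a standalone sketch:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	  ps -a --quiet --label io.kubernetes.pod.namespace=kube-system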
	I1027 22:40:31.687887  756848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:40:31.697214  756848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:40:31.697231  756848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:40:31.697274  756848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:40:31.705188  756848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:40:31.706218  756848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-290425" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:31.706815  756848 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-290425" cluster setting kubeconfig missing "newest-cni-290425" context setting]
	I1027 22:40:31.708077  756848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.710194  756848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:40:31.719725  756848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 22:40:31.719756  756848 kubeadm.go:602] duration metric: took 22.519377ms to restartPrimaryControlPlane
	I1027 22:40:31.719767  756848 kubeadm.go:403] duration metric: took 82.367104ms to StartCluster
	I1027 22:40:31.719783  756848 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.719848  756848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:31.722417  756848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.722691  756848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:40:31.722773  756848 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:40:31.722874  756848 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-290425"
	I1027 22:40:31.722893  756848 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-290425"
	W1027 22:40:31.722902  756848 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:40:31.722931  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.722937  756848 addons.go:69] Setting dashboard=true in profile "newest-cni-290425"
	I1027 22:40:31.722973  756848 addons.go:238] Setting addon dashboard=true in "newest-cni-290425"
	W1027 22:40:31.722982  756848 addons.go:247] addon dashboard should already be in state true
	I1027 22:40:31.722987  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:31.723014  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.723047  756848 addons.go:69] Setting default-storageclass=true in profile "newest-cni-290425"
	I1027 22:40:31.723064  756848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-290425"
	I1027 22:40:31.723353  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.723550  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.723800  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.725730  756848 out.go:179] * Verifying Kubernetes components...
	I1027 22:40:31.726934  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:31.749813  756848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:40:31.749929  756848 addons.go:238] Setting addon default-storageclass=true in "newest-cni-290425"
	W1027 22:40:31.749966  756848 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:40:31.750012  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.750560  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.750761  756848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 22:40:31.750784  756848 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:31.750805  756848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:40:31.750863  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.756414  756848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1027 22:40:29.188109  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:31.188378  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:31.757286  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 22:40:31.757307  756848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 22:40:31.757368  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.788482  756848 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:31.788523  756848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:40:31.788585  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.788473  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.791269  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.812300  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
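	[editor's note] The "scp memory" lines above stream manifests that are embedded in the minikube binary straight to the node over the SSH clients set up here (127.0.0.1:33103). A sketch of the moral equivalent with a plain ssh client; host, port, and key path are copied from the sshutil lines, and `sudo tee` stands in for minikube's sshutil transfer, which works differently internally:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// copyToNode pipes an in-memory manifest to a root-owned path on the node,
// like the "scp memory --> /etc/kubernetes/addons/..." steps in the log.
func copyToNode(manifest []byte, dest string) error {
	cmd := exec.Command("ssh", "-p", "33103",
		"-i", "/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa",
		"docker@127.0.0.1",
		"sudo tee "+dest+" >/dev/null")
	cmd.Stdin = bytes.NewReader(manifest)
	return cmd.Run()
}

func main() {
	err := copyToNode(
		[]byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n"),
		"/etc/kubernetes/addons/demo-ns.yaml") // illustrative path
	fmt.Println(err)
}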
	I1027 22:40:31.876427  756848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:31.890087  756848 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:40:31.890171  756848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:40:31.904610  756848 api_server.go:72] duration metric: took 181.883596ms to wait for apiserver process to appear ...
	I1027 22:40:31.904641  756848 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:40:31.904675  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:31.911250  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:31.913745  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 22:40:31.913771  756848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 22:40:31.928922  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 22:40:31.928985  756848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 22:40:31.937602  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:31.944700  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 22:40:31.944729  756848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 22:40:31.965934  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 22:40:31.965991  756848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 22:40:31.983504  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 22:40:31.983534  756848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 22:40:32.000875  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 22:40:32.000897  756848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 22:40:32.015058  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 22:40:32.015175  756848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 22:40:32.028828  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 22:40:32.028864  756848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 22:40:32.042288  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:40:32.042313  756848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 22:40:32.055615  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
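	[editor's note] Once the manifests are on disk, each addon is applied in one shot with the cluster's own kubectl binary, pinned to the Kubernetes version and pointed at the node-local kubeconfig. A sketch of that invocation shape (an illustrative wrapper, not minikube's addons.go; paths are taken verbatim from the commands above):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifests mirrors the `sudo KUBECONFIG=/var/lib/minikube/kubeconfig
// /var/lib/minikube/binaries/<ver>/kubectl apply -f ...` commands in the log.
func applyManifests(version string, manifests []string) (string, error) {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/" + version + "/kubectl",
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := applyManifests("v1.34.1", []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	})
	fmt.Println(out, err)
}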
	I1027 22:40:33.250603  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:40:33.250644  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:40:33.250661  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.259803  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:40:33.259841  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:40:33.405243  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.410997  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:33.411027  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:33.838488  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.927200802s)
	I1027 22:40:33.838547  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.900910209s)
	I1027 22:40:33.838682  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.783022367s)
	I1027 22:40:33.840157  756848 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-290425 addons enable metrics-server
	
	I1027 22:40:33.849803  756848 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 22:40:33.851036  756848 addons.go:514] duration metric: took 2.128273879s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 22:40:33.905759  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.909856  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:33.909880  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:34.405178  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:34.409922  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:34.409969  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:34.905382  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:34.910198  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 22:40:34.911208  756848 api_server.go:141] control plane version: v1.34.1
	I1027 22:40:34.911251  756848 api_server.go:131] duration metric: took 3.006601962s to wait for apiserver health ...
	I1027 22:40:34.911260  756848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:40:34.915094  756848 system_pods.go:59] 8 kube-system pods found
	I1027 22:40:34.915146  756848 system_pods.go:61] "coredns-66bc5c9577-hmtz5" [d0253fb1-e66b-448e-8b6d-e9882120ffd2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:34.915160  756848 system_pods.go:61] "etcd-newest-cni-290425" [fa08a886-4040-46e0-9e58-975345432c48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:40:34.915178  756848 system_pods.go:61] "kindnet-pk58m" [12e1d8a7-de11-4047-85f7-4832c3a7e80c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 22:40:34.915190  756848 system_pods.go:61] "kube-apiserver-newest-cni-290425" [36218ab8-7cc4-4487-9dcd-5186adc9d4c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:40:34.915203  756848 system_pods.go:61] "kube-controller-manager-newest-cni-290425" [494bc2f7-8ec5-40bb-bd19-0c4a96b93532] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:40:34.915217  756848 system_pods.go:61] "kube-proxy-d866g" [ba6a46e3-367b-40d2-a919-35b062379af3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 22:40:34.915235  756848 system_pods.go:61] "kube-scheduler-newest-cni-290425" [69cd3450-9c48-455d-9bc0-b8f45eeb37c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:40:34.915246  756848 system_pods.go:61] "storage-provisioner" [d8b271bc-46b6-4d99-a6a2-27907f5afc55] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:34.915256  756848 system_pods.go:74] duration metric: took 3.987353ms to wait for pod list to return data ...
	I1027 22:40:34.915270  756848 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:40:34.917715  756848 default_sa.go:45] found service account: "default"
	I1027 22:40:34.917735  756848 default_sa.go:55] duration metric: took 2.459034ms for default service account to be created ...
	I1027 22:40:34.917746  756848 kubeadm.go:587] duration metric: took 3.195028043s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:34.917762  756848 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:40:34.920111  756848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:40:34.920150  756848 node_conditions.go:123] node cpu capacity is 8
	I1027 22:40:34.920168  756848 node_conditions.go:105] duration metric: took 2.398457ms to run NodePressure ...
	I1027 22:40:34.920187  756848 start.go:242] waiting for startup goroutines ...
	I1027 22:40:34.920198  756848 start.go:247] waiting for cluster config update ...
	I1027 22:40:34.920210  756848 start.go:256] writing updated cluster config ...
	I1027 22:40:34.920542  756848 ssh_runner.go:195] Run: rm -f paused
	I1027 22:40:34.975966  756848 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:40:34.978357  756848 out.go:179] * Done! kubectl is now configured to use "newest-cni-290425" cluster and "default" namespace by default
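	[editor's note] With the profile started, the same kubeconfig the repair step wrote can drive any client-go program; the system_pods wait above is essentially a pod list in kube-system with a readiness check per pod. A self-contained sketch of that list (kubeconfig path taken from the log; error handling abbreviated):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig minikube just updated.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21790-482142/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List kube-system pods, the same set the "8 kube-system pods found"
	// lines enumerate above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}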
	
	
	==> CRI-O <==
	Oct 27 22:39:58 embed-certs-829976 crio[554]: time="2025-10-27T22:39:58.817070161Z" level=info msg="Created container a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05: kube-system/storage-provisioner/storage-provisioner" id=e6c65e25-0351-4d7e-966b-cbfa72ec7726 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:58 embed-certs-829976 crio[554]: time="2025-10-27T22:39:58.817711188Z" level=info msg="Starting container: a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05" id=6c7641c9-7bbc-4714-8ca9-87d4a149b953 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:39:58 embed-certs-829976 crio[554]: time="2025-10-27T22:39:58.819617848Z" level=info msg="Started container" PID=1749 containerID=a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05 description=kube-system/storage-provisioner/storage-provisioner id=6c7641c9-7bbc-4714-8ca9-87d4a149b953 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d8b57a32c1d39ddc2a90f50b655a2ab3f2ba573afffaeb8f7ce811294a1b018
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.795963585Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d4b74e9b-b301-45d2-aaa7-11cdd58fb05f name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.799453127Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=11380d5b-9f8b-4e58-b5aa-273aa4e04596 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.802712203Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=7a58a280-570d-4cd0-b99b-ef667b7f7cf7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.80285717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.810830903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.811442718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.838902017Z" level=info msg="Created container ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=7a58a280-570d-4cd0-b99b-ef667b7f7cf7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.839656972Z" level=info msg="Starting container: ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1" id=b5d4d83a-72c6-4657-a28b-e00142af6f54 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.842182969Z" level=info msg="Started container" PID=1765 containerID=ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper id=b5d4d83a-72c6-4657-a28b-e00142af6f54 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3ffa6fdd156f9691791cff81f46f19b11ba77f2d5baf835b7207d1239ca1011
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.908208809Z" level=info msg="Removing container: 9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2" id=266d1a61-93c3-490d-96ea-95d7a39607ff name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.918829932Z" level=info msg="Removed container 9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=266d1a61-93c3-490d-96ea-95d7a39607ff name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.771406974Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e62edc76-e9b6-46ac-a0e6-9c7f8ea13bf6 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.772214696Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3691bbff-48b5-4ff6-a098-dc5e268920d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.7733535Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=d2e0af4c-a381-42b7-a47f-f9a162185239 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.773514735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.779266755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.779720083Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.807004316Z" level=info msg="Created container 8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=d2e0af4c-a381-42b7-a47f-f9a162185239 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.807655713Z" level=info msg="Starting container: 8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c" id=96ca145f-7707-448d-9fb8-78aec0313499 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.809395384Z" level=info msg="Started container" PID=1800 containerID=8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper id=96ca145f-7707-448d-9fb8-78aec0313499 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3ffa6fdd156f9691791cff81f46f19b11ba77f2d5baf835b7207d1239ca1011
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.976906407Z" level=info msg="Removing container: ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1" id=73a1b58c-78f7-4764-9203-010e384dc52a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.986897262Z" level=info msg="Removed container ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=73a1b58c-78f7-4764-9203-010e384dc52a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8bca0d942824a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   a3ffa6fdd156f       dashboard-metrics-scraper-6ffb444bf9-692mj   kubernetes-dashboard
	a243b7b9c5b09       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           38 seconds ago      Running             storage-provisioner         2                   9d8b57a32c1d3       storage-provisioner                          kube-system
	ba17aac76ebe9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   2047f771bfddd       kubernetes-dashboard-855c9754f9-lfssc        kubernetes-dashboard
	9447a45980928       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         1                   9d8b57a32c1d3       storage-provisioner                          kube-system
	1507cb3b17a78       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   f12aaa9c21f0e       busybox                                      default
	cbd067eb16796       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   187c4fc087f9b       kube-proxy-gf725                             kube-system
	51e1b51f1e8d7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   dc5d84442778b       coredns-66bc5c9577-msbj9                     kube-system
	36caab8434beb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   481ec4596d4e7       kindnet-dtjql                                kube-system
	2f44a2722d5cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   677e4f0a1172a       kube-apiserver-embed-certs-829976            kube-system
	45a7ab4d45789       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   02d964aa0a0f0       etcd-embed-certs-829976                      kube-system
	9ebb5d429db0f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   1fa36f3424228       kube-scheduler-embed-certs-829976            kube-system
	e617c18783204       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   c9a0301067956       kube-controller-manager-embed-certs-829976   kube-system
	
	
	==> coredns [51e1b51f1e8d7456d9abb387421db6e13e287cb56c376344f144076a8be30b1b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56781 - 49169 "HINFO IN 1622635663925324999.5082171501692183482. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034279357s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
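	[editor's note] The errors above come from coredns's kubernetes plugin trying to list Services, Namespaces, and EndpointSlices through the in-cluster service VIP; "dial tcp 10.96.0.1:443: i/o timeout" usually indicates that kube-proxy had not yet reprogrammed the VIP's forwarding rules after the restart, rather than the apiserver itself being down. A trivial probe for that condition, run from inside a pod (a sketch, not part of the report's tooling):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the in-cluster apiserver VIP the way coredns's client would; a timeout
// here reproduces the "dial tcp 10.96.0.1:443: i/o timeout" errors above.
func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("service VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("service VIP reachable")
}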
	
	
	==> describe nodes <==
	Name:               embed-certs-829976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-829976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=embed-certs-829976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_38_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:38:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-829976
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:40:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:40:13 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:40:13 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:40:13 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:40:13 +0000   Mon, 27 Oct 2025 22:38:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-829976
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                3b5f0575-3075-4eff-8d0c-0490f489999a
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-msbj9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-829976                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-dtjql                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-829976             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-829976    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-gf725                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-829976             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-692mj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lfssc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-829976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-829976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-829976 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-829976 event: Registered Node embed-certs-829976 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-829976 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-829976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-829976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-829976 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-829976 event: Registered Node embed-certs-829976 in Controller
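	[editor's note] As a cross-check, the Allocated resources totals above are just the column sums of the non-terminated pods: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m; memory requests 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet) = 220Mi; and the 100m CPU / 220Mi memory limits come entirely from kindnet (100m, 50Mi) plus coredns (170Mi).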
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [45a7ab4d457895149bd74409ca1cf2067d30d698e93850bc8e3ded4ce106bbab] <==
	{"level":"info","ts":"2025-10-27T22:39:42.798034Z","caller":"traceutil/trace.go:172","msg":"trace[1573754819] transaction","detail":"{read_only:false; number_of_response:0; response_revision:439; }","duration":"136.050047ms","start":"2025-10-27T22:39:42.661964Z","end":"2025-10-27T22:39:42.798014Z","steps":["trace[1573754819] 'process raft request'  (duration: 135.958329ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:42.913410Z","caller":"traceutil/trace.go:172","msg":"trace[1480240731] linearizableReadLoop","detail":"{readStateIndex:462; appliedIndex:462; }","duration":"115.510307ms","start":"2025-10-27T22:39:42.797872Z","end":"2025-10-27T22:39:42.913382Z","steps":["trace[1480240731] 'read index received'  (duration: 115.505098ms)","trace[1480240731] 'applied index is now lower than readState.Index'  (duration: 4.27µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T22:39:42.913565Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.355348ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:1145"}
	{"level":"warn","ts":"2025-10-27T22:39:42.913537Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.585539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-829976\" limit:1 ","response":"range_response_count:1 size:5850"}
	{"level":"info","ts":"2025-10-27T22:39:42.913600Z","caller":"traceutil/trace.go:172","msg":"trace[486996741] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:1; response_revision:439; }","duration":"176.401611ms","start":"2025-10-27T22:39:42.737189Z","end":"2025-10-27T22:39:42.913590Z","steps":["trace[486996741] 'agreement among raft nodes before linearized reading'  (duration: 176.284169ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:42.913609Z","caller":"traceutil/trace.go:172","msg":"trace[120467976] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-829976; range_end:; response_count:1; response_revision:439; }","duration":"170.676258ms","start":"2025-10-27T22:39:42.742921Z","end":"2025-10-27T22:39:42.913598Z","steps":["trace[120467976] 'agreement among raft nodes before linearized reading'  (duration: 170.484881ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:42.913615Z","caller":"traceutil/trace.go:172","msg":"trace[591147520] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"245.528028ms","start":"2025-10-27T22:39:42.668069Z","end":"2025-10-27T22:39:42.913597Z","steps":["trace[591147520] 'process raft request'  (duration: 245.343279ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:42.937387Z","caller":"traceutil/trace.go:172","msg":"trace[1623333237] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"135.999727ms","start":"2025-10-27T22:39:42.801369Z","end":"2025-10-27T22:39:42.937369Z","steps":["trace[1623333237] 'process raft request'  (duration: 135.920507ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:42.937494Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.508907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-829976\" limit:1 ","response":"range_response_count:1 size:4807"}
	{"level":"info","ts":"2025-10-27T22:39:42.937542Z","caller":"traceutil/trace.go:172","msg":"trace[759877859] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-829976; range_end:; response_count:1; response_revision:441; }","duration":"138.569878ms","start":"2025-10-27T22:39:42.798963Z","end":"2025-10-27T22:39:42.937533Z","steps":["trace[759877859] 'agreement among raft nodes before linearized reading'  (duration: 138.440303ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:43.161661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.218377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:1 size:1210"}
	{"level":"info","ts":"2025-10-27T22:39:43.161731Z","caller":"traceutil/trace.go:172","msg":"trace[1595167566] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:443; }","duration":"116.30819ms","start":"2025-10-27T22:39:43.045410Z","end":"2025-10-27T22:39:43.161718Z","steps":["trace[1595167566] 'agreement among raft nodes before linearized reading'  (duration: 95.508249ms)","trace[1595167566] 'range keys from in-memory index tree'  (duration: 20.613693ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:39:43.161732Z","caller":"traceutil/trace.go:172","msg":"trace[500861968] transaction","detail":"{read_only:false; number_of_response:0; response_revision:443; }","duration":"122.018054ms","start":"2025-10-27T22:39:43.039661Z","end":"2025-10-27T22:39:43.161679Z","steps":["trace[500861968] 'process raft request'  (duration: 101.360338ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:43.161763Z","caller":"traceutil/trace.go:172","msg":"trace[258552786] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"121.501711ms","start":"2025-10-27T22:39:43.040242Z","end":"2025-10-27T22:39:43.161743Z","steps":["trace[258552786] 'process raft request'  (duration: 121.35371ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:43.386497Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.121035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:897"}
	{"level":"info","ts":"2025-10-27T22:39:43.386565Z","caller":"traceutil/trace.go:172","msg":"trace[456840584] range","detail":"{range_begin:/registry/namespaces/kubernetes-dashboard; range_end:; response_count:1; response_revision:446; }","duration":"122.194184ms","start":"2025-10-27T22:39:43.264358Z","end":"2025-10-27T22:39:43.386552Z","steps":["trace[456840584] 'range keys from in-memory index tree'  (duration: 121.955994ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:43.386497Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.10118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-829976\" limit:1 ","response":"range_response_count:1 size:4807"}
	{"level":"info","ts":"2025-10-27T22:39:43.386801Z","caller":"traceutil/trace.go:172","msg":"trace[1069850366] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-829976; range_end:; response_count:1; response_revision:446; }","duration":"122.377818ms","start":"2025-10-27T22:39:43.264364Z","end":"2025-10-27T22:39:43.386742Z","steps":["trace[1069850366] 'range keys from in-memory index tree'  (duration: 121.91151ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:43.570830Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.493568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:monitoring\" limit:1 ","response":"range_response_count:1 size:658"}
	{"level":"info","ts":"2025-10-27T22:39:43.570903Z","caller":"traceutil/trace.go:172","msg":"trace[710008890] range","detail":"{range_begin:/registry/clusterroles/system:monitoring; range_end:; response_count:1; response_revision:448; }","duration":"136.581743ms","start":"2025-10-27T22:39:43.434305Z","end":"2025-10-27T22:39:43.570887Z","steps":["trace[710008890] 'agreement among raft nodes before linearized reading'  (duration: 73.629594ms)","trace[710008890] 'range keys from in-memory index tree'  (duration: 62.726974ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T22:39:43.571078Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.679144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T22:39:43.571140Z","caller":"traceutil/trace.go:172","msg":"trace[1442354172] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:450; }","duration":"117.741319ms","start":"2025-10-27T22:39:43.453377Z","end":"2025-10-27T22:39:43.571118Z","steps":["trace[1442354172] 'agreement among raft nodes before linearized reading'  (duration: 117.643304ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:43.571230Z","caller":"traceutil/trace.go:172","msg":"trace[33738410] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"134.984573ms","start":"2025-10-27T22:39:43.436228Z","end":"2025-10-27T22:39:43.571212Z","steps":["trace[33738410] 'process raft request'  (duration: 134.626481ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:43.571260Z","caller":"traceutil/trace.go:172","msg":"trace[1607143507] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"138.318158ms","start":"2025-10-27T22:39:43.432931Z","end":"2025-10-27T22:39:43.571249Z","steps":["trace[1607143507] 'process raft request'  (duration: 75.091973ms)","trace[1607143507] 'compare'  (duration: 62.694886ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:39:52.070385Z","caller":"traceutil/trace.go:172","msg":"trace[119835233] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"118.126521ms","start":"2025-10-27T22:39:51.952235Z","end":"2025-10-27T22:39:52.070361Z","steps":["trace[119835233] 'process raft request'  (duration: 57.853558ms)","trace[119835233] 'compare'  (duration: 60.150298ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:40:37 up  2:22,  0 user,  load average: 4.27, 3.20, 2.92
	Linux embed-certs-829976 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36caab8434bebf2193c8f305a5d81c1aa34986386d7338a3bbd3c750f1b6e6db] <==
	I1027 22:39:43.904631       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:39:43.904693       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:39:43.904806       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:39:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:39:44.202390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:39:44.202416       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:39:44.202434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:39:44.202585       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 22:39:44.202871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 22:39:44.299391       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 22:39:44.299546       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 22:39:44.299762       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 22:39:45.802619       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:39:45.802650       1 metrics.go:72] Registering metrics
	I1027 22:39:45.802718       1 controller.go:711] "Syncing nftables rules"
	I1027 22:39:54.203146       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:39:54.203187       1 main.go:301] handling current node
	I1027 22:40:04.205083       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:40:04.205154       1 main.go:301] handling current node
	I1027 22:40:14.203122       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:40:14.203179       1 main.go:301] handling current node
	I1027 22:40:24.205060       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:40:24.205149       1 main.go:301] handling current node
	I1027 22:40:34.207612       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:40:34.207654       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2f44a2722d5ccd7616df1090c6bb0dbee4aa51ec06009ab3a0c5b8d4976586ea] <==
	I1027 22:39:42.450506       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 22:39:42.459737       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:39:42.464856       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:39:42.482356       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 22:39:42.482476       1 policy_source.go:240] refreshing policies
	I1027 22:39:42.484093       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 22:39:42.484222       1 aggregator.go:171] initial CRD sync complete...
	I1027 22:39:42.484265       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:39:42.484290       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:39:42.484298       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:39:42.488344       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:39:42.800879       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:39:42.937997       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:39:42.938111       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:39:42.938145       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:39:43.390763       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:39:43.432421       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:39:43.727408       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:39:43.739902       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:39:43.854912       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.220.243"}
	I1027 22:39:43.876618       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.67.84"}
	I1027 22:39:45.995483       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:39:45.995539       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:39:46.196145       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:39:46.346986       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e617c18783204a4f1e575bdec7825512002bad31cb3b04208481ca9f4c563564] <==
	I1027 22:39:45.792880       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:39:45.792907       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:39:45.793064       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:39:45.793169       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-829976"
	I1027 22:39:45.793242       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:39:45.793674       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 22:39:45.793803       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 22:39:45.795080       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:39:45.797359       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 22:39:45.798194       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:39:45.798255       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:39:45.798280       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:39:45.798285       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:39:45.798290       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:39:45.799716       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:39:45.800904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 22:39:45.800926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 22:39:45.802111       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 22:39:45.802133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 22:39:45.807337       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:39:45.807359       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:39:45.809631       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:39:45.809736       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:39:45.810927       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:39:45.815123       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cbd067eb1679660eec922b92c264ea4b4019dcf99c1f78856e648b52f27cb061] <==
	I1027 22:39:43.857085       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:39:43.928385       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:39:44.029447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:39:44.029483       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 22:39:44.029574       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:39:44.141507       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:39:44.141585       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:39:44.148806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:39:44.151213       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:39:44.151239       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:39:44.154283       1 config.go:200] "Starting service config controller"
	I1027 22:39:44.154361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:39:44.154757       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:39:44.155317       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:39:44.154771       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:39:44.155379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:39:44.154894       1 config.go:309] "Starting node config controller"
	I1027 22:39:44.155446       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:39:44.155471       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:39:44.256187       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:39:44.256174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:39:44.256220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9ebb5d429db0f5d2cfac0c88b414dd785a0b2d57b9fcfeb926197b670710530b] <==
	I1027 22:39:41.899143       1 serving.go:386] Generated self-signed cert in-memory
	I1027 22:39:43.037408       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:39:43.037442       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:39:43.093025       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:39:43.093021       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 22:39:43.093079       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 22:39:43.093031       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:39:43.093199       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:39:43.093081       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:39:43.093498       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:39:43.093808       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:39:43.193635       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:39:43.193687       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 22:39:43.193728       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:39:46 embed-certs-829976 kubelet[710]: I1027 22:39:46.512131     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9xr\" (UniqueName: \"kubernetes.io/projected/9b2e681b-9a25-4761-a5b6-5c3800ecbc39-kube-api-access-gl9xr\") pod \"kubernetes-dashboard-855c9754f9-lfssc\" (UID: \"9b2e681b-9a25-4761-a5b6-5c3800ecbc39\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lfssc"
	Oct 27 22:39:46 embed-certs-829976 kubelet[710]: I1027 22:39:46.512160     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9b2e681b-9a25-4761-a5b6-5c3800ecbc39-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lfssc\" (UID: \"9b2e681b-9a25-4761-a5b6-5c3800ecbc39\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lfssc"
	Oct 27 22:39:49 embed-certs-829976 kubelet[710]: I1027 22:39:49.908308     710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 22:39:51 embed-certs-829976 kubelet[710]: I1027 22:39:51.901036     710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lfssc" podStartSLOduration=1.6162784430000001 podStartE2EDuration="5.901012419s" podCreationTimestamp="2025-10-27 22:39:46 +0000 UTC" firstStartedPulling="2025-10-27 22:39:46.75884191 +0000 UTC m=+7.088816919" lastFinishedPulling="2025-10-27 22:39:51.043575886 +0000 UTC m=+11.373550895" observedRunningTime="2025-10-27 22:39:51.900886686 +0000 UTC m=+12.230861713" watchObservedRunningTime="2025-10-27 22:39:51.901012419 +0000 UTC m=+12.230987467"
	Oct 27 22:39:53 embed-certs-829976 kubelet[710]: I1027 22:39:53.867109     710 scope.go:117] "RemoveContainer" containerID="a7e3857bc1af61fb0d46132ecb316ba3c8e9f4d0973c3bb973d3ebc33409d93e"
	Oct 27 22:39:54 embed-certs-829976 kubelet[710]: I1027 22:39:54.872071     710 scope.go:117] "RemoveContainer" containerID="a7e3857bc1af61fb0d46132ecb316ba3c8e9f4d0973c3bb973d3ebc33409d93e"
	Oct 27 22:39:54 embed-certs-829976 kubelet[710]: I1027 22:39:54.872234     710 scope.go:117] "RemoveContainer" containerID="9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2"
	Oct 27 22:39:54 embed-certs-829976 kubelet[710]: E1027 22:39:54.872417     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:39:55 embed-certs-829976 kubelet[710]: I1027 22:39:55.877447     710 scope.go:117] "RemoveContainer" containerID="9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2"
	Oct 27 22:39:55 embed-certs-829976 kubelet[710]: E1027 22:39:55.877672     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:39:58 embed-certs-829976 kubelet[710]: I1027 22:39:58.769412     710 scope.go:117] "RemoveContainer" containerID="9447a459809287a6975180679ff62303ef8cb2896cb8860c8aeb82b1b5e8bc3c"
	Oct 27 22:40:04 embed-certs-829976 kubelet[710]: I1027 22:40:04.795315     710 scope.go:117] "RemoveContainer" containerID="9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2"
	Oct 27 22:40:04 embed-certs-829976 kubelet[710]: I1027 22:40:04.906800     710 scope.go:117] "RemoveContainer" containerID="9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2"
	Oct 27 22:40:04 embed-certs-829976 kubelet[710]: I1027 22:40:04.907087     710 scope.go:117] "RemoveContainer" containerID="ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1"
	Oct 27 22:40:04 embed-certs-829976 kubelet[710]: E1027 22:40:04.907280     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:40:14 embed-certs-829976 kubelet[710]: I1027 22:40:14.795527     710 scope.go:117] "RemoveContainer" containerID="ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1"
	Oct 27 22:40:14 embed-certs-829976 kubelet[710]: E1027 22:40:14.795722     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:40:27 embed-certs-829976 kubelet[710]: I1027 22:40:27.770984     710 scope.go:117] "RemoveContainer" containerID="ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1"
	Oct 27 22:40:27 embed-certs-829976 kubelet[710]: I1027 22:40:27.975547     710 scope.go:117] "RemoveContainer" containerID="ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1"
	Oct 27 22:40:27 embed-certs-829976 kubelet[710]: I1027 22:40:27.975745     710 scope.go:117] "RemoveContainer" containerID="8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c"
	Oct 27 22:40:27 embed-certs-829976 kubelet[710]: E1027 22:40:27.975968     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:40:33 embed-certs-829976 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:40:33 embed-certs-829976 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:40:33 embed-certs-829976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:40:33 embed-certs-829976 systemd[1]: kubelet.service: Consumed 1.809s CPU time.
	
	
	==> kubernetes-dashboard [ba17aac76ebe9be83a75950a4ef6b7b6315fa93827de262d9593d4e97bbdf936] <==
	2025/10/27 22:39:51 Using namespace: kubernetes-dashboard
	2025/10/27 22:39:51 Using in-cluster config to connect to apiserver
	2025/10/27 22:39:51 Using secret token for csrf signing
	2025/10/27 22:39:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:39:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:39:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 22:39:51 Generating JWE encryption key
	2025/10/27 22:39:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:39:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:39:51 Initializing JWE encryption key from synchronized object
	2025/10/27 22:39:51 Creating in-cluster Sidecar client
	2025/10/27 22:39:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:39:51 Serving insecurely on HTTP port: 9090
	2025/10/27 22:39:51 Starting overwatch
	2025/10/27 22:40:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9447a459809287a6975180679ff62303ef8cb2896cb8860c8aeb82b1b5e8bc3c] <==
	I1027 22:39:43.896703       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:39:43.898923       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05] <==
	I1027 22:40:16.238338       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7c5079ea-db67-4e50-8ac0-354c5782f492", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-829976_197dc557-f0f6-4608-99c9-5a723663949b became leader
	I1027 22:40:16.238405       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-829976_197dc557-f0f6-4608-99c9-5a723663949b!
	W1027 22:40:16.241010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:16.247361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 22:40:16.339393       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-829976_197dc557-f0f6-4608-99c9-5a723663949b!
	W1027 22:40:18.250824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:18.255883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:20.260622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:20.266570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:22.270362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:22.275501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:24.278709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:24.284124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:26.288008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:26.292155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:28.295929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:28.300836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:30.304715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:30.308900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:32.311967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:32.315821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:34.318716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:34.323141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:36.326483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:36.330293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
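Note: the repeated storage-provisioner warnings above come from its use of the legacy v1 Endpoints API, which the API server flags as deprecated since v1.33. A minimal client-go sketch of the suggested discovery.k8s.io/v1 EndpointSlice replacement (illustrative only, not the provisioner's actual code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config, as a pod such as the provisioner would use.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List discovery.k8s.io/v1 EndpointSlices instead of v1 Endpoints;
		// this avoids the "v1 Endpoints is deprecated" warning seen above.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").
			List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name)
		}
	}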
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-829976 -n embed-certs-829976
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-829976 -n embed-certs-829976: exit status 2 (349.163507ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-829976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
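For reference, the dashboard-metrics-scraper back-off intervals in the kubelet log above (10s, 20s, 40s) follow kubelet's CrashLoopBackOff doubling. A small sketch of the progression, assuming the default base of 10s and cap of 5m:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// kubelet doubles the restart back-off per crash, capped at 5m;
		// the first three values match the messages quoted above.
		backoff := 10 * time.Second
		const maxBackoff = 5 * time.Minute
		for i := 0; i < 7; i++ {
			fmt.Println(backoff) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}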
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-829976
helpers_test.go:243: (dbg) docker inspect embed-certs-829976:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4",
	        "Created": "2025-10-27T22:38:24.135878096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:39:31.028103078Z",
	            "FinishedAt": "2025-10-27T22:39:29.983394918Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/hosts",
	        "LogPath": "/var/lib/docker/containers/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4/faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4-json.log",
	        "Name": "/embed-certs-829976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-829976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-829976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "faeaf04da269542545a74b2f266b3535aaee7afac782a8ebacfb6391ffdb5cd4",
	                "LowerDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da582491fb25482b9e52792c56eb955fe0fa2e1540c98c078e55757389126f7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-829976",
	                "Source": "/var/lib/docker/volumes/embed-certs-829976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-829976",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-829976",
	                "name.minikube.sigs.k8s.io": "embed-certs-829976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44e37758f1abe5f5a5c831f5f14a8dc85d9323346b7b61ced077ed068a66e5c7",
	            "SandboxKey": "/var/run/docker/netns/44e37758f1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-829976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:26:0b:40:d3:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19326983879b440afd91ddad1f1a29b86b26ac185f059a173d6110952f20d348",
	                    "EndpointID": "a1c0233c9157dcb94206d0f164b4c48815960e9dac53b07132a982c3b0a5e539",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-829976",
	                        "faeaf04da269"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829976 -n embed-certs-829976
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829976 -n embed-certs-829976: exit status 2 (377.05243ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
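The --format={{.Host}} and --format={{.APIServer}} arguments used above are ordinary Go text/template expressions rendered against minikube's status struct. A minimal illustration of the mechanism (the struct here is a stand-in, not minikube's actual type; field names mirror the templates used in this report):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct minikube renders with --format.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// "{{.Host}}" is parsed and executed like any Go text/template,
		// printing "Running" for the status captured above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Paused"}); err != nil {
			panic(err)
		}
	}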
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-829976 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-829976 logs -n 25: (1.237084118s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p kubernetes-upgrade-695499                                                                                                                                                                                                                  │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p auto-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-927034 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-290425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ stop    │ -p newest-cni-290425 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927034 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 pgrep -a kubelet                                                                                                                                                                                                               │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable dashboard -p newest-cni-290425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ image   │ embed-certs-829976 image list --format=json                                                                                                                                                                                                   │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ pause   │ -p embed-certs-829976 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ image   │ newest-cni-290425 image list --format=json                                                                                                                                                                                                    │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ pause   │ -p newest-cni-290425 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/hosts                                                                                                                                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/resolv.conf                                                                                                                                                                                                      │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo crictl pods                                                                                                                                                                                                               │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo crictl ps --all                                                                                                                                                                                                           │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:40:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:40:24.438209  756848 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:40:24.438329  756848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:24.438340  756848 out.go:374] Setting ErrFile to fd 2...
	I1027 22:40:24.438345  756848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:24.438673  756848 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:40:24.439297  756848 out.go:368] Setting JSON to false
	I1027 22:40:24.440841  756848 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8563,"bootTime":1761596261,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:40:24.440961  756848 start.go:143] virtualization: kvm guest
	I1027 22:40:24.442921  756848 out.go:179] * [newest-cni-290425] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:40:24.445592  756848 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:40:24.445629  756848 notify.go:221] Checking for updates...
	I1027 22:40:24.448124  756848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:40:24.449565  756848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:24.451090  756848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:40:24.452160  756848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:40:24.456462  756848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:40:24.458338  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:24.459094  756848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:40:24.488803  756848 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:40:24.488892  756848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:24.557828  756848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:40:24.546418647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
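
The `docker system info --format "{{json .}}"` probe that produced the dump above emits a single JSON document. A minimal Go sketch of issuing and decoding that probe, kept to a few of the fields visible in the dump (NCPU, MemTotal, CgroupDriver, ServerVersion); the struct and program are illustrative assumptions, not minikube's actual info.go:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo mirrors a small subset of the JSON that
// `docker system info --format "{{json .}}"` prints.
type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	CgroupDriver  string `json:"CgroupDriver"`
	ServerVersion string `json:"ServerVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// For the host above: 8 CPUs, 33652072448 bytes RAM, cgroup driver systemd, server 28.5.1
	fmt.Printf("%d CPUs, %d bytes RAM, cgroup driver %s, server %s\n",
		info.NCPU, info.MemTotal, info.CgroupDriver, info.ServerVersion)
}
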
	I1027 22:40:24.557998  756848 docker.go:318] overlay module found
	I1027 22:40:24.559462  756848 out.go:179] * Using the docker driver based on existing profile
	I1027 22:40:24.560558  756848 start.go:307] selected driver: docker
	I1027 22:40:24.560578  756848 start.go:928] validating driver "docker" against &{Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:24.560718  756848 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:40:24.561602  756848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:24.632177  756848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:40:24.620016626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:24.632569  756848 start_flags.go:1010] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:24.632600  756848 cni.go:84] Creating CNI manager for ""
	I1027 22:40:24.632673  756848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:24.632732  756848 start.go:351] cluster config:
	{Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:24.634382  756848 out.go:179] * Starting "newest-cni-290425" primary control-plane node in "newest-cni-290425" cluster
	I1027 22:40:24.635369  756848 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:40:24.636382  756848 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:40:24.637272  756848 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:24.637317  756848 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:40:24.637329  756848 cache.go:59] Caching tarball of preloaded images
	I1027 22:40:24.637336  756848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:40:24.637435  756848 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:40:24.637450  756848 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:40:24.637576  756848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:40:24.659489  756848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:40:24.659511  756848 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:40:24.659527  756848 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:40:24.659550  756848 start.go:360] acquireMachinesLock for newest-cni-290425: {Name:mk4e0aa51aaa1a604f2ac1e14d4e9ad4994a6e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:40:24.659621  756848 start.go:364] duration metric: took 41.13µs to acquireMachinesLock for "newest-cni-290425"
	I1027 22:40:24.659640  756848 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:40:24.659645  756848 fix.go:55] fixHost starting: 
	I1027 22:40:24.659871  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:24.679073  756848 fix.go:113] recreateIfNeeded on newest-cni-290425: state=Stopped err=<nil>
	W1027 22:40:24.679130  756848 fix.go:139] unexpected machine state, will restart: <nil>
	W1027 22:40:24.188623  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:26.687852  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:24.681022  756848 out.go:252] * Restarting existing docker container for "newest-cni-290425" ...
	I1027 22:40:24.681102  756848 cli_runner.go:164] Run: docker start newest-cni-290425
	I1027 22:40:24.992255  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:25.014514  756848 kic.go:430] container "newest-cni-290425" state is running.
	I1027 22:40:25.015046  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:25.038668  756848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:40:25.038987  756848 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:25.039099  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:25.061826  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:25.062260  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:25.062285  756848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:25.063188  756848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45146->127.0.0.1:33103: read: connection reset by peer
	I1027 22:40:28.204458  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:40:28.204492  756848 ubuntu.go:182] provisioning hostname "newest-cni-290425"
	I1027 22:40:28.204559  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.222514  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.222737  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.222759  756848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-290425 && echo "newest-cni-290425" | sudo tee /etc/hostname
	I1027 22:40:28.375236  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:40:28.375318  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.392770  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.393063  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.393082  756848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-290425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-290425/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-290425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:40:28.533683  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
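
The guarded script above is idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends one when none exists. An illustration on a hypothetical /etc/hosts (file contents assumed, not captured from the node):

# before:
127.0.0.1 localhost
127.0.1.1 old-hostname
# after one run (and unchanged by any repeat run):
127.0.0.1 localhost
127.0.1.1 newest-cni-290425
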
	I1027 22:40:28.533712  756848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:40:28.533740  756848 ubuntu.go:190] setting up certificates
	I1027 22:40:28.533756  756848 provision.go:84] configureAuth start
	I1027 22:40:28.533832  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:28.551092  756848 provision.go:143] copyHostCerts
	I1027 22:40:28.551157  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:40:28.551183  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:40:28.551262  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:40:28.551424  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:40:28.551439  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:40:28.551489  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:40:28.551578  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:40:28.551589  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:40:28.551627  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:40:28.551720  756848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-290425 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-290425]
	I1027 22:40:28.786512  756848 provision.go:177] copyRemoteCerts
	I1027 22:40:28.786589  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:40:28.786645  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.804351  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:28.905399  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:40:28.923296  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:40:28.940336  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:40:28.958758  756848 provision.go:87] duration metric: took 424.98667ms to configureAuth
	I1027 22:40:28.958786  756848 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:40:28.959034  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:28.959153  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.977021  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.977337  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.977362  756848 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:40:29.254601  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:40:29.254630  756848 machine.go:97] duration metric: took 4.215620835s to provisionDockerMachine
	I1027 22:40:29.254645  756848 start.go:293] postStartSetup for "newest-cni-290425" (driver="docker")
	I1027 22:40:29.254658  756848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:40:29.254744  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:40:29.254799  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.272656  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.373656  756848 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:40:29.377346  756848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:40:29.377381  756848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:40:29.377394  756848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:40:29.377439  756848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:40:29.377507  756848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:40:29.377598  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:40:29.385749  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:29.403339  756848 start.go:296] duration metric: took 148.678819ms for postStartSetup
	I1027 22:40:29.403416  756848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:40:29.403473  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.421865  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.520183  756848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:40:29.524936  756848 fix.go:57] duration metric: took 4.865280599s for fixHost
	I1027 22:40:29.524989  756848 start.go:83] releasing machines lock for "newest-cni-290425", held for 4.865355811s
	I1027 22:40:29.525055  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:29.542221  756848 ssh_runner.go:195] Run: cat /version.json
	I1027 22:40:29.542269  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.542325  756848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:40:29.542380  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.560078  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.560376  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.658503  756848 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:29.714758  756848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:40:29.751819  756848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:40:29.757527  756848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:40:29.757592  756848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:40:29.766082  756848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:40:29.766107  756848 start.go:496] detecting cgroup driver to use...
	I1027 22:40:29.766144  756848 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:40:29.766201  756848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:40:29.782220  756848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:40:29.795704  756848 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:40:29.795756  756848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:40:29.811814  756848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:40:29.824770  756848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:40:29.911398  756848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:40:30.002621  756848 docker.go:234] disabling docker service ...
	I1027 22:40:30.002705  756848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:40:30.018425  756848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:40:30.032066  756848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:40:30.126259  756848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:40:30.224136  756848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:40:30.240695  756848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:40:30.262231  756848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:40:30.262309  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.272017  756848 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:40:30.272077  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.281097  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.290459  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.299765  756848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:40:30.308783  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.318037  756848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.326660  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
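
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in. This is a reconstruction from the commands logged here; the [crio.image]/[crio.runtime] section headers and any neighboring keys in the file are assumptions, not a capture of the actual file:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
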
	I1027 22:40:30.335545  756848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:40:30.343816  756848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:40:30.351923  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:30.438807  756848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:40:30.541588  756848 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:40:30.541647  756848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:40:30.545709  756848 start.go:564] Will wait 60s for crictl version
	I1027 22:40:30.545763  756848 ssh_runner.go:195] Run: which crictl
	I1027 22:40:30.549390  756848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:40:30.574840  756848 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:40:30.574912  756848 ssh_runner.go:195] Run: crio --version
	I1027 22:40:30.603907  756848 ssh_runner.go:195] Run: crio --version
	I1027 22:40:30.635251  756848 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:40:30.636309  756848 cli_runner.go:164] Run: docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:30.652517  756848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:40:30.656856  756848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:30.668683  756848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1027 22:40:30.669554  756848 kubeadm.go:884] updating cluster {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:40:30.669731  756848 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:30.669822  756848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:30.704544  756848 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:30.704566  756848 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:40:30.704611  756848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:30.734075  756848 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:30.734098  756848 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:40:30.734106  756848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 22:40:30.734202  756848 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-290425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:40:30.734273  756848 ssh_runner.go:195] Run: crio config
	I1027 22:40:30.780046  756848 cni.go:84] Creating CNI manager for ""
	I1027 22:40:30.780067  756848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:30.780090  756848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 22:40:30.780113  756848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-290425 NodeName:newest-cni-290425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:40:30.780240  756848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-290425"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
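
A kubeadm config of this shape can be checked offline before it is written to /var/tmp/minikube/kubeadm.yaml.new below; for instance (not a step the test runs, and assuming kubeadm v1.26+, where this subcommand exists):

kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new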
	
	I1027 22:40:30.780304  756848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:40:30.788709  756848 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:40:30.788776  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:40:30.796691  756848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:40:30.809324  756848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:40:30.821977  756848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1027 22:40:30.834850  756848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:40:30.838629  756848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:30.848598  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:30.930756  756848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:30.960505  756848 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425 for IP: 192.168.76.2
	I1027 22:40:30.960526  756848 certs.go:195] generating shared ca certs ...
	I1027 22:40:30.960549  756848 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:30.960716  756848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:40:30.960760  756848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:40:30.960770  756848 certs.go:257] generating profile certs ...
	I1027 22:40:30.960854  756848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key
	I1027 22:40:30.960928  756848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67
	I1027 22:40:30.961028  756848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key
	I1027 22:40:30.961171  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:40:30.961204  756848 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:40:30.961217  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:40:30.961254  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:40:30.961289  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:40:30.961318  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:40:30.961382  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:30.962311  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:40:30.982191  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:40:31.003485  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:40:31.024750  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:40:31.051339  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:40:31.070810  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:40:31.089035  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:40:31.107252  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:40:31.124793  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:40:31.142653  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:40:31.162599  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:40:31.180139  756848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:40:31.194578  756848 ssh_runner.go:195] Run: openssl version
	I1027 22:40:31.200775  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:40:31.210145  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.214047  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.214105  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.252428  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:40:31.261073  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:40:31.270127  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.274120  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.274183  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.309111  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:40:31.317698  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:40:31.326420  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.330243  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.330307  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.365724  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:40:31.374331  756848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:40:31.378340  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:40:31.413065  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:40:31.448812  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:40:31.492414  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:40:31.536913  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:40:31.581567  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
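
Each `openssl x509 -noout -checkend 86400` run above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a clean pass means the existing certificates can be kept. A minimal Go sketch of the same probe (cert paths taken from the log; the program itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// expiringWithin24h reports whether openssl thinks the certificate
// will expire within the next 86400 seconds (non-zero exit status).
func expiringWithin24h(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Println(c, "expiring within 24h:", expiringWithin24h(c))
	}
}
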
	I1027 22:40:31.637412  756848 kubeadm.go:401] StartCluster: {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:31.637550  756848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:40:31.637610  756848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:40:31.673955  756848 cri.go:89] found id: "5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727"
	I1027 22:40:31.673983  756848 cri.go:89] found id: "e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295"
	I1027 22:40:31.673988  756848 cri.go:89] found id: "bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f"
	I1027 22:40:31.673993  756848 cri.go:89] found id: "54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d"
	I1027 22:40:31.673996  756848 cri.go:89] found id: ""
	I1027 22:40:31.674047  756848 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:40:31.687812  756848 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:31Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:40:31.687887  756848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:40:31.697214  756848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:40:31.697231  756848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:40:31.697274  756848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:40:31.705188  756848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:40:31.706218  756848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-290425" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:31.706815  756848 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-290425" cluster setting kubeconfig missing "newest-cni-290425" context setting]
	I1027 22:40:31.708077  756848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.710194  756848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:40:31.719725  756848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 22:40:31.719756  756848 kubeadm.go:602] duration metric: took 22.519377ms to restartPrimaryControlPlane
	I1027 22:40:31.719767  756848 kubeadm.go:403] duration metric: took 82.367104ms to StartCluster
	I1027 22:40:31.719783  756848 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.719848  756848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:31.722417  756848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.722691  756848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:40:31.722773  756848 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:40:31.722874  756848 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-290425"
	I1027 22:40:31.722893  756848 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-290425"
	W1027 22:40:31.722902  756848 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:40:31.722931  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.722937  756848 addons.go:69] Setting dashboard=true in profile "newest-cni-290425"
	I1027 22:40:31.722973  756848 addons.go:238] Setting addon dashboard=true in "newest-cni-290425"
	W1027 22:40:31.722982  756848 addons.go:247] addon dashboard should already be in state true
	I1027 22:40:31.722987  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:31.723014  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.723047  756848 addons.go:69] Setting default-storageclass=true in profile "newest-cni-290425"
	I1027 22:40:31.723064  756848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-290425"
	I1027 22:40:31.723353  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.723550  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.723800  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.725730  756848 out.go:179] * Verifying Kubernetes components...
	I1027 22:40:31.726934  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:31.749813  756848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:40:31.749929  756848 addons.go:238] Setting addon default-storageclass=true in "newest-cni-290425"
	W1027 22:40:31.749966  756848 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:40:31.750012  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.750560  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.750761  756848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 22:40:31.750784  756848 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:31.750805  756848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:40:31.750863  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.756414  756848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1027 22:40:29.188109  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:31.188378  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:31.757286  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 22:40:31.757307  756848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 22:40:31.757368  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.788482  756848 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:31.788523  756848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:40:31.788585  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.788473  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.791269  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.812300  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.876427  756848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:31.890087  756848 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:40:31.890171  756848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:40:31.904610  756848 api_server.go:72] duration metric: took 181.883596ms to wait for apiserver process to appear ...
	I1027 22:40:31.904641  756848 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:40:31.904675  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:31.911250  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:31.913745  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 22:40:31.913771  756848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 22:40:31.928922  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 22:40:31.928985  756848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 22:40:31.937602  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:31.944700  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 22:40:31.944729  756848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 22:40:31.965934  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 22:40:31.965991  756848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 22:40:31.983504  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 22:40:31.983534  756848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 22:40:32.000875  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 22:40:32.000897  756848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 22:40:32.015058  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 22:40:32.015175  756848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 22:40:32.028828  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 22:40:32.028864  756848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 22:40:32.042288  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:40:32.042313  756848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 22:40:32.055615  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
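	All ten dashboard manifests are applied in a single kubectl invocation against the in-VM kubeconfig. A sketch of an equivalent call, assuming the dashboard YAMLs were staged in their own directory (the directory name here is hypothetical):
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
		  -f /etc/kubernetes/addons/dashboard/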
	I1027 22:40:33.250603  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:40:33.250644  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:40:33.250661  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.259803  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:40:33.259841  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:40:33.405243  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.410997  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:33.411027  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
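	The 403s come from probing /healthz anonymously before the RBAC bootstrap roles grant system:anonymous access, and the 500s show exactly two post-start hooks still pending. The same checks can be probed by hand through the authenticated client; the per-check path below is an assumption based on the apiserver exposing each named check under /healthz/:
	
		# verbose listing of every health check
		kubectl --context newest-cni-290425 get --raw='/healthz?verbose'
		# probe just the hook that is failing above
		kubectl --context newest-cni-290425 get --raw='/healthz/poststarthook/rbac/bootstrap-roles'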
	I1027 22:40:33.838488  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.927200802s)
	I1027 22:40:33.838547  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.900910209s)
	I1027 22:40:33.838682  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.783022367s)
	I1027 22:40:33.840157  756848 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-290425 addons enable metrics-server
	
	I1027 22:40:33.849803  756848 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 22:40:33.851036  756848 addons.go:514] duration metric: took 2.128273879s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 22:40:33.905759  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.909856  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:33.909880  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:34.405178  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:34.409922  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:34.409969  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:34.905382  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:34.910198  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 22:40:34.911208  756848 api_server.go:141] control plane version: v1.34.1
	I1027 22:40:34.911251  756848 api_server.go:131] duration metric: took 3.006601962s to wait for apiserver health ...
	I1027 22:40:34.911260  756848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:40:34.915094  756848 system_pods.go:59] 8 kube-system pods found
	I1027 22:40:34.915146  756848 system_pods.go:61] "coredns-66bc5c9577-hmtz5" [d0253fb1-e66b-448e-8b6d-e9882120ffd2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:34.915160  756848 system_pods.go:61] "etcd-newest-cni-290425" [fa08a886-4040-46e0-9e58-975345432c48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:40:34.915178  756848 system_pods.go:61] "kindnet-pk58m" [12e1d8a7-de11-4047-85f7-4832c3a7e80c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 22:40:34.915190  756848 system_pods.go:61] "kube-apiserver-newest-cni-290425" [36218ab8-7cc4-4487-9dcd-5186adc9d4c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:40:34.915203  756848 system_pods.go:61] "kube-controller-manager-newest-cni-290425" [494bc2f7-8ec5-40bb-bd19-0c4a96b93532] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:40:34.915217  756848 system_pods.go:61] "kube-proxy-d866g" [ba6a46e3-367b-40d2-a919-35b062379af3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 22:40:34.915235  756848 system_pods.go:61] "kube-scheduler-newest-cni-290425" [69cd3450-9c48-455d-9bc0-b8f45eeb37c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:40:34.915246  756848 system_pods.go:61] "storage-provisioner" [d8b271bc-46b6-4d99-a6a2-27907f5afc55] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:34.915256  756848 system_pods.go:74] duration metric: took 3.987353ms to wait for pod list to return data ...
	I1027 22:40:34.915270  756848 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:40:34.917715  756848 default_sa.go:45] found service account: "default"
	I1027 22:40:34.917735  756848 default_sa.go:55] duration metric: took 2.459034ms for default service account to be created ...
	I1027 22:40:34.917746  756848 kubeadm.go:587] duration metric: took 3.195028043s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:34.917762  756848 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:40:34.920111  756848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:40:34.920150  756848 node_conditions.go:123] node cpu capacity is 8
	I1027 22:40:34.920168  756848 node_conditions.go:105] duration metric: took 2.398457ms to run NodePressure ...
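	The NodePressure pass reads capacity and conditions straight off the node object; the same fields can be pulled with a jsonpath query (node name matches the profile here):
	
		kubectl get node newest-cni-290425 \
		  -o jsonpath='{.status.capacity}{"\n"}{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'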
	I1027 22:40:34.920187  756848 start.go:242] waiting for startup goroutines ...
	I1027 22:40:34.920198  756848 start.go:247] waiting for cluster config update ...
	I1027 22:40:34.920210  756848 start.go:256] writing updated cluster config ...
	I1027 22:40:34.920542  756848 ssh_runner.go:195] Run: rm -f paused
	I1027 22:40:34.975966  756848 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:40:34.978357  756848 out.go:179] * Done! kubectl is now configured to use "newest-cni-290425" cluster and "default" namespace by default
	W1027 22:40:33.190344  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:35.689191  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 27 22:39:58 embed-certs-829976 crio[554]: time="2025-10-27T22:39:58.817070161Z" level=info msg="Created container a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05: kube-system/storage-provisioner/storage-provisioner" id=e6c65e25-0351-4d7e-966b-cbfa72ec7726 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:39:58 embed-certs-829976 crio[554]: time="2025-10-27T22:39:58.817711188Z" level=info msg="Starting container: a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05" id=6c7641c9-7bbc-4714-8ca9-87d4a149b953 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:39:58 embed-certs-829976 crio[554]: time="2025-10-27T22:39:58.819617848Z" level=info msg="Started container" PID=1749 containerID=a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05 description=kube-system/storage-provisioner/storage-provisioner id=6c7641c9-7bbc-4714-8ca9-87d4a149b953 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d8b57a32c1d39ddc2a90f50b655a2ab3f2ba573afffaeb8f7ce811294a1b018
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.795963585Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d4b74e9b-b301-45d2-aaa7-11cdd58fb05f name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.799453127Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=11380d5b-9f8b-4e58-b5aa-273aa4e04596 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.802712203Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=7a58a280-570d-4cd0-b99b-ef667b7f7cf7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.80285717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.810830903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.811442718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.838902017Z" level=info msg="Created container ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=7a58a280-570d-4cd0-b99b-ef667b7f7cf7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.839656972Z" level=info msg="Starting container: ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1" id=b5d4d83a-72c6-4657-a28b-e00142af6f54 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.842182969Z" level=info msg="Started container" PID=1765 containerID=ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper id=b5d4d83a-72c6-4657-a28b-e00142af6f54 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3ffa6fdd156f9691791cff81f46f19b11ba77f2d5baf835b7207d1239ca1011
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.908208809Z" level=info msg="Removing container: 9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2" id=266d1a61-93c3-490d-96ea-95d7a39607ff name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:40:04 embed-certs-829976 crio[554]: time="2025-10-27T22:40:04.918829932Z" level=info msg="Removed container 9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=266d1a61-93c3-490d-96ea-95d7a39607ff name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.771406974Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e62edc76-e9b6-46ac-a0e6-9c7f8ea13bf6 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.772214696Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3691bbff-48b5-4ff6-a098-dc5e268920d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.7733535Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=d2e0af4c-a381-42b7-a47f-f9a162185239 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.773514735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.779266755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.779720083Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.807004316Z" level=info msg="Created container 8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=d2e0af4c-a381-42b7-a47f-f9a162185239 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.807655713Z" level=info msg="Starting container: 8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c" id=96ca145f-7707-448d-9fb8-78aec0313499 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.809395384Z" level=info msg="Started container" PID=1800 containerID=8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper id=96ca145f-7707-448d-9fb8-78aec0313499 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3ffa6fdd156f9691791cff81f46f19b11ba77f2d5baf835b7207d1239ca1011
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.976906407Z" level=info msg="Removing container: ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1" id=73a1b58c-78f7-4764-9203-010e384dc52a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:40:27 embed-certs-829976 crio[554]: time="2025-10-27T22:40:27.986897262Z" level=info msg="Removed container ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj/dashboard-metrics-scraper" id=73a1b58c-78f7-4764-9203-010e384dc52a name=/runtime.v1.RuntimeService/RemoveContainer
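	These entries are CRI-O daemon logs from the embed-certs-829976 node. On the node itself the same stream is normally readable through journald, assuming the service unit is named crio as in the minikube node images:
	
		sudo journalctl -u crio --since "2025-10-27 22:39:00" --no-pager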
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8bca0d942824a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   a3ffa6fdd156f       dashboard-metrics-scraper-6ffb444bf9-692mj   kubernetes-dashboard
	a243b7b9c5b09       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           40 seconds ago      Running             storage-provisioner         2                   9d8b57a32c1d3       storage-provisioner                          kube-system
	ba17aac76ebe9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   2047f771bfddd       kubernetes-dashboard-855c9754f9-lfssc        kubernetes-dashboard
	9447a45980928       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         1                   9d8b57a32c1d3       storage-provisioner                          kube-system
	1507cb3b17a78       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   f12aaa9c21f0e       busybox                                      default
	cbd067eb16796       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   187c4fc087f9b       kube-proxy-gf725                             kube-system
	51e1b51f1e8d7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   dc5d84442778b       coredns-66bc5c9577-msbj9                     kube-system
	36caab8434beb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   481ec4596d4e7       kindnet-dtjql                                kube-system
	2f44a2722d5cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   677e4f0a1172a       kube-apiserver-embed-certs-829976            kube-system
	45a7ab4d45789       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   02d964aa0a0f0       etcd-embed-certs-829976                      kube-system
	9ebb5d429db0f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   1fa36f3424228       kube-scheduler-embed-certs-829976            kube-system
	e617c18783204       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   c9a0301067956       kube-controller-manager-embed-certs-829976   kube-system
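	The table above is CRI-O's container listing. With crictl pointed at the CRI-O socket it can be reproduced, and the repeatedly exiting dashboard-metrics-scraper attempt inspected (container ID prefix taken from the table):
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
		sudo crictl logs 8bca0d942824a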
	
	
	==> coredns [51e1b51f1e8d7456d9abb387421db6e13e287cb56c376344f144076a8be30b1b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56781 - 49169 "HINFO IN 1622635663925324999.5082171501692183482. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034279357s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
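	CoreDNS is timing out dialing the kubernetes service VIP (10.96.0.1:443), so its informers never sync. A throwaway pod can test the same path from inside the cluster; the pod name and image tag here are illustrative:
	
		kubectl run netprobe --image=busybox:1.36 --restart=Never --rm -it -- \
		  sh -c 'nc -w 5 10.96.0.1 443 </dev/null && echo reachable || echo unreachable'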
	
	
	==> describe nodes <==
	Name:               embed-certs-829976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-829976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=embed-certs-829976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_38_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:38:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-829976
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:40:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:40:13 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:40:13 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:40:13 +0000   Mon, 27 Oct 2025 22:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:40:13 +0000   Mon, 27 Oct 2025 22:38:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-829976
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                3b5f0575-3075-4eff-8d0c-0490f489999a
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-msbj9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-embed-certs-829976                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-dtjql                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-embed-certs-829976             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-embed-certs-829976    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-gf725                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-embed-certs-829976             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-692mj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lfssc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node embed-certs-829976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node embed-certs-829976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node embed-certs-829976 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s               node-controller  Node embed-certs-829976 event: Registered Node embed-certs-829976 in Controller
	  Normal  NodeReady                101s               kubelet          Node embed-certs-829976 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-829976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-829976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-829976 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-829976 event: Registered Node embed-certs-829976 in Controller
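	The dump above is standard kubectl describe node output. The trailing Events table can also be pulled on its own, sorted by recency:
	
		kubectl describe node embed-certs-829976
		kubectl get events --field-selector involvedObject.name=embed-certs-829976 --sort-by=.lastTimestamp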
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
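	The repeated martian-source lines are hairpin traffic from 127.0.0.1 arriving on eth0, which the kernel logs by default. If the noise is unwanted during test runs it can be muted at runtime; this is a sketch of the knob, not a fix for any failure here:
	
		sudo sysctl -w net.ipv4.conf.all.log_martians=0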
	
	
	==> etcd [45a7ab4d457895149bd74409ca1cf2067d30d698e93850bc8e3ded4ce106bbab] <==
	{"level":"info","ts":"2025-10-27T22:39:42.798034Z","caller":"traceutil/trace.go:172","msg":"trace[1573754819] transaction","detail":"{read_only:false; number_of_response:0; response_revision:439; }","duration":"136.050047ms","start":"2025-10-27T22:39:42.661964Z","end":"2025-10-27T22:39:42.798014Z","steps":["trace[1573754819] 'process raft request'  (duration: 135.958329ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:42.913410Z","caller":"traceutil/trace.go:172","msg":"trace[1480240731] linearizableReadLoop","detail":"{readStateIndex:462; appliedIndex:462; }","duration":"115.510307ms","start":"2025-10-27T22:39:42.797872Z","end":"2025-10-27T22:39:42.913382Z","steps":["trace[1480240731] 'read index received'  (duration: 115.505098ms)","trace[1480240731] 'applied index is now lower than readState.Index'  (duration: 4.27µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T22:39:42.913565Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.355348ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:1145"}
	{"level":"warn","ts":"2025-10-27T22:39:42.913537Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.585539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-829976\" limit:1 ","response":"range_response_count:1 size:5850"}
	{"level":"info","ts":"2025-10-27T22:39:42.913600Z","caller":"traceutil/trace.go:172","msg":"trace[486996741] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:1; response_revision:439; }","duration":"176.401611ms","start":"2025-10-27T22:39:42.737189Z","end":"2025-10-27T22:39:42.913590Z","steps":["trace[486996741] 'agreement among raft nodes before linearized reading'  (duration: 176.284169ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:42.913609Z","caller":"traceutil/trace.go:172","msg":"trace[120467976] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-829976; range_end:; response_count:1; response_revision:439; }","duration":"170.676258ms","start":"2025-10-27T22:39:42.742921Z","end":"2025-10-27T22:39:42.913598Z","steps":["trace[120467976] 'agreement among raft nodes before linearized reading'  (duration: 170.484881ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:42.913615Z","caller":"traceutil/trace.go:172","msg":"trace[591147520] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"245.528028ms","start":"2025-10-27T22:39:42.668069Z","end":"2025-10-27T22:39:42.913597Z","steps":["trace[591147520] 'process raft request'  (duration: 245.343279ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:42.937387Z","caller":"traceutil/trace.go:172","msg":"trace[1623333237] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"135.999727ms","start":"2025-10-27T22:39:42.801369Z","end":"2025-10-27T22:39:42.937369Z","steps":["trace[1623333237] 'process raft request'  (duration: 135.920507ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:42.937494Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.508907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-829976\" limit:1 ","response":"range_response_count:1 size:4807"}
	{"level":"info","ts":"2025-10-27T22:39:42.937542Z","caller":"traceutil/trace.go:172","msg":"trace[759877859] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-829976; range_end:; response_count:1; response_revision:441; }","duration":"138.569878ms","start":"2025-10-27T22:39:42.798963Z","end":"2025-10-27T22:39:42.937533Z","steps":["trace[759877859] 'agreement among raft nodes before linearized reading'  (duration: 138.440303ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:43.161661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.218377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:1 size:1210"}
	{"level":"info","ts":"2025-10-27T22:39:43.161731Z","caller":"traceutil/trace.go:172","msg":"trace[1595167566] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:443; }","duration":"116.30819ms","start":"2025-10-27T22:39:43.045410Z","end":"2025-10-27T22:39:43.161718Z","steps":["trace[1595167566] 'agreement among raft nodes before linearized reading'  (duration: 95.508249ms)","trace[1595167566] 'range keys from in-memory index tree'  (duration: 20.613693ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:39:43.161732Z","caller":"traceutil/trace.go:172","msg":"trace[500861968] transaction","detail":"{read_only:false; number_of_response:0; response_revision:443; }","duration":"122.018054ms","start":"2025-10-27T22:39:43.039661Z","end":"2025-10-27T22:39:43.161679Z","steps":["trace[500861968] 'process raft request'  (duration: 101.360338ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:43.161763Z","caller":"traceutil/trace.go:172","msg":"trace[258552786] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"121.501711ms","start":"2025-10-27T22:39:43.040242Z","end":"2025-10-27T22:39:43.161743Z","steps":["trace[258552786] 'process raft request'  (duration: 121.35371ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:43.386497Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.121035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:897"}
	{"level":"info","ts":"2025-10-27T22:39:43.386565Z","caller":"traceutil/trace.go:172","msg":"trace[456840584] range","detail":"{range_begin:/registry/namespaces/kubernetes-dashboard; range_end:; response_count:1; response_revision:446; }","duration":"122.194184ms","start":"2025-10-27T22:39:43.264358Z","end":"2025-10-27T22:39:43.386552Z","steps":["trace[456840584] 'range keys from in-memory index tree'  (duration: 121.955994ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:43.386497Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.10118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-829976\" limit:1 ","response":"range_response_count:1 size:4807"}
	{"level":"info","ts":"2025-10-27T22:39:43.386801Z","caller":"traceutil/trace.go:172","msg":"trace[1069850366] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-829976; range_end:; response_count:1; response_revision:446; }","duration":"122.377818ms","start":"2025-10-27T22:39:43.264364Z","end":"2025-10-27T22:39:43.386742Z","steps":["trace[1069850366] 'range keys from in-memory index tree'  (duration: 121.91151ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:39:43.570830Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.493568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:monitoring\" limit:1 ","response":"range_response_count:1 size:658"}
	{"level":"info","ts":"2025-10-27T22:39:43.570903Z","caller":"traceutil/trace.go:172","msg":"trace[710008890] range","detail":"{range_begin:/registry/clusterroles/system:monitoring; range_end:; response_count:1; response_revision:448; }","duration":"136.581743ms","start":"2025-10-27T22:39:43.434305Z","end":"2025-10-27T22:39:43.570887Z","steps":["trace[710008890] 'agreement among raft nodes before linearized reading'  (duration: 73.629594ms)","trace[710008890] 'range keys from in-memory index tree'  (duration: 62.726974ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T22:39:43.571078Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.679144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T22:39:43.571140Z","caller":"traceutil/trace.go:172","msg":"trace[1442354172] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:450; }","duration":"117.741319ms","start":"2025-10-27T22:39:43.453377Z","end":"2025-10-27T22:39:43.571118Z","steps":["trace[1442354172] 'agreement among raft nodes before linearized reading'  (duration: 117.643304ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:43.571230Z","caller":"traceutil/trace.go:172","msg":"trace[33738410] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"134.984573ms","start":"2025-10-27T22:39:43.436228Z","end":"2025-10-27T22:39:43.571212Z","steps":["trace[33738410] 'process raft request'  (duration: 134.626481ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:39:43.571260Z","caller":"traceutil/trace.go:172","msg":"trace[1607143507] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"138.318158ms","start":"2025-10-27T22:39:43.432931Z","end":"2025-10-27T22:39:43.571249Z","steps":["trace[1607143507] 'process raft request'  (duration: 75.091973ms)","trace[1607143507] 'compare'  (duration: 62.694886ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:39:52.070385Z","caller":"traceutil/trace.go:172","msg":"trace[119835233] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"118.126521ms","start":"2025-10-27T22:39:51.952235Z","end":"2025-10-27T22:39:52.070361Z","steps":["trace[119835233] 'process raft request'  (duration: 57.853558ms)","trace[119835233] 'compare'  (duration: 60.150298ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:40:39 up  2:22,  0 user,  load average: 4.65, 3.30, 2.95
	Linux embed-certs-829976 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36caab8434bebf2193c8f305a5d81c1aa34986386d7338a3bbd3c750f1b6e6db] <==
	I1027 22:39:43.904631       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:39:43.904693       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:39:43.904806       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:39:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:39:44.202390       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:39:44.202416       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:39:44.202434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:39:44.202585       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 22:39:44.202871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 22:39:44.299391       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 22:39:44.299546       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 22:39:44.299762       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 22:39:45.802619       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:39:45.802650       1 metrics.go:72] Registering metrics
	I1027 22:39:45.802718       1 controller.go:711] "Syncing nftables rules"
	I1027 22:39:54.203146       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:39:54.203187       1 main.go:301] handling current node
	I1027 22:40:04.205083       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:40:04.205154       1 main.go:301] handling current node
	I1027 22:40:14.203122       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:40:14.203179       1 main.go:301] handling current node
	I1027 22:40:24.205060       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:40:24.205149       1 main.go:301] handling current node
	I1027 22:40:34.207612       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 22:40:34.207654       1 main.go:301] handling current node
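
The "Failed to watch ... connection refused" errors above are client-go's reflector listing before the apiserver came up; it retries with backoff until the initial list succeeds, which is when "Caches are synced" appears. A minimal sketch of that informer pattern, assuming in-cluster credentials (the Pod informer is illustrative, not kindnet's actual wiring):

	package main

	import (
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		// Assumption: running inside a pod, as kindnet does.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(clientset, 0)
		podInformer := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// The reflector behind the informer retries failed LISTs (the
		// "connection refused" errors above) with backoff; this returns
		// once the initial list+watch succeeds ("Caches are synced").
		cache.WaitForCacheSync(stop, podInformer.HasSynced)
	}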
	
	
	==> kube-apiserver [2f44a2722d5ccd7616df1090c6bb0dbee4aa51ec06009ab3a0c5b8d4976586ea] <==
	I1027 22:39:42.450506       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 22:39:42.459737       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:39:42.464856       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:39:42.482356       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 22:39:42.482476       1 policy_source.go:240] refreshing policies
	I1027 22:39:42.484093       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 22:39:42.484222       1 aggregator.go:171] initial CRD sync complete...
	I1027 22:39:42.484265       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:39:42.484290       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:39:42.484298       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:39:42.488344       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:39:42.800879       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:39:42.937997       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:39:42.938111       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:39:42.938145       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:39:43.390763       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:39:43.432421       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:39:43.727408       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:39:43.739902       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:39:43.854912       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.220.243"}
	I1027 22:39:43.876618       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.67.84"}
	I1027 22:39:45.995483       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:39:45.995539       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:39:46.196145       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:39:46.346986       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e617c18783204a4f1e575bdec7825512002bad31cb3b04208481ca9f4c563564] <==
	I1027 22:39:45.792880       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:39:45.792907       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:39:45.793064       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:39:45.793169       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-829976"
	I1027 22:39:45.793242       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:39:45.793674       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 22:39:45.793803       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 22:39:45.795080       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:39:45.797359       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 22:39:45.798194       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:39:45.798255       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:39:45.798280       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:39:45.798285       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:39:45.798290       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:39:45.799716       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:39:45.800904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 22:39:45.800926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 22:39:45.802111       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 22:39:45.802133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 22:39:45.807337       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:39:45.807359       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:39:45.809631       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:39:45.809736       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 22:39:45.810927       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:39:45.815123       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cbd067eb1679660eec922b92c264ea4b4019dcf99c1f78856e648b52f27cb061] <==
	I1027 22:39:43.857085       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:39:43.928385       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:39:44.029447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:39:44.029483       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 22:39:44.029574       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:39:44.141507       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:39:44.141585       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:39:44.148806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:39:44.151213       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:39:44.151239       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:39:44.154283       1 config.go:200] "Starting service config controller"
	I1027 22:39:44.154361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:39:44.154757       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:39:44.155317       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:39:44.154771       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:39:44.155379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:39:44.154894       1 config.go:309] "Starting node config controller"
	I1027 22:39:44.155446       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:39:44.155471       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:39:44.256187       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:39:44.256174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:39:44.256220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9ebb5d429db0f5d2cfac0c88b414dd785a0b2d57b9fcfeb926197b670710530b] <==
	I1027 22:39:41.899143       1 serving.go:386] Generated self-signed cert in-memory
	I1027 22:39:43.037408       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:39:43.037442       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:39:43.093025       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:39:43.093021       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 22:39:43.093079       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 22:39:43.093031       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:39:43.093199       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:39:43.093081       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:39:43.093498       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:39:43.093808       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:39:43.193635       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:39:43.193687       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 22:39:43.193728       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:39:46 embed-certs-829976 kubelet[710]: I1027 22:39:46.512131     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9xr\" (UniqueName: \"kubernetes.io/projected/9b2e681b-9a25-4761-a5b6-5c3800ecbc39-kube-api-access-gl9xr\") pod \"kubernetes-dashboard-855c9754f9-lfssc\" (UID: \"9b2e681b-9a25-4761-a5b6-5c3800ecbc39\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lfssc"
	Oct 27 22:39:46 embed-certs-829976 kubelet[710]: I1027 22:39:46.512160     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9b2e681b-9a25-4761-a5b6-5c3800ecbc39-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lfssc\" (UID: \"9b2e681b-9a25-4761-a5b6-5c3800ecbc39\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lfssc"
	Oct 27 22:39:49 embed-certs-829976 kubelet[710]: I1027 22:39:49.908308     710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 22:39:51 embed-certs-829976 kubelet[710]: I1027 22:39:51.901036     710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lfssc" podStartSLOduration=1.6162784430000001 podStartE2EDuration="5.901012419s" podCreationTimestamp="2025-10-27 22:39:46 +0000 UTC" firstStartedPulling="2025-10-27 22:39:46.75884191 +0000 UTC m=+7.088816919" lastFinishedPulling="2025-10-27 22:39:51.043575886 +0000 UTC m=+11.373550895" observedRunningTime="2025-10-27 22:39:51.900886686 +0000 UTC m=+12.230861713" watchObservedRunningTime="2025-10-27 22:39:51.901012419 +0000 UTC m=+12.230987467"
	Oct 27 22:39:53 embed-certs-829976 kubelet[710]: I1027 22:39:53.867109     710 scope.go:117] "RemoveContainer" containerID="a7e3857bc1af61fb0d46132ecb316ba3c8e9f4d0973c3bb973d3ebc33409d93e"
	Oct 27 22:39:54 embed-certs-829976 kubelet[710]: I1027 22:39:54.872071     710 scope.go:117] "RemoveContainer" containerID="a7e3857bc1af61fb0d46132ecb316ba3c8e9f4d0973c3bb973d3ebc33409d93e"
	Oct 27 22:39:54 embed-certs-829976 kubelet[710]: I1027 22:39:54.872234     710 scope.go:117] "RemoveContainer" containerID="9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2"
	Oct 27 22:39:54 embed-certs-829976 kubelet[710]: E1027 22:39:54.872417     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:39:55 embed-certs-829976 kubelet[710]: I1027 22:39:55.877447     710 scope.go:117] "RemoveContainer" containerID="9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2"
	Oct 27 22:39:55 embed-certs-829976 kubelet[710]: E1027 22:39:55.877672     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:39:58 embed-certs-829976 kubelet[710]: I1027 22:39:58.769412     710 scope.go:117] "RemoveContainer" containerID="9447a459809287a6975180679ff62303ef8cb2896cb8860c8aeb82b1b5e8bc3c"
	Oct 27 22:40:04 embed-certs-829976 kubelet[710]: I1027 22:40:04.795315     710 scope.go:117] "RemoveContainer" containerID="9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2"
	Oct 27 22:40:04 embed-certs-829976 kubelet[710]: I1027 22:40:04.906800     710 scope.go:117] "RemoveContainer" containerID="9c996beb64bac401eec02ecd918eae28dda843bcbf3030a034a12aea7b8a10e2"
	Oct 27 22:40:04 embed-certs-829976 kubelet[710]: I1027 22:40:04.907087     710 scope.go:117] "RemoveContainer" containerID="ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1"
	Oct 27 22:40:04 embed-certs-829976 kubelet[710]: E1027 22:40:04.907280     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:40:14 embed-certs-829976 kubelet[710]: I1027 22:40:14.795527     710 scope.go:117] "RemoveContainer" containerID="ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1"
	Oct 27 22:40:14 embed-certs-829976 kubelet[710]: E1027 22:40:14.795722     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:40:27 embed-certs-829976 kubelet[710]: I1027 22:40:27.770984     710 scope.go:117] "RemoveContainer" containerID="ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1"
	Oct 27 22:40:27 embed-certs-829976 kubelet[710]: I1027 22:40:27.975547     710 scope.go:117] "RemoveContainer" containerID="ca2496591a1aa1c1c8a66d0109e6412c0aa40719a7a82a8e39377ab3e747daf1"
	Oct 27 22:40:27 embed-certs-829976 kubelet[710]: I1027 22:40:27.975745     710 scope.go:117] "RemoveContainer" containerID="8bca0d942824a617858e34d5d8e0d4ee376c804cb3b925fb0b606c87f2bcbd4c"
	Oct 27 22:40:27 embed-certs-829976 kubelet[710]: E1027 22:40:27.975968     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-692mj_kubernetes-dashboard(a1761b09-0d81-4e8f-89c3-743f9a6b0e1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-692mj" podUID="a1761b09-0d81-4e8f-89c3-743f9a6b0e1c"
	Oct 27 22:40:33 embed-certs-829976 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:40:33 embed-certs-829976 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:40:33 embed-certs-829976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:40:33 embed-certs-829976 systemd[1]: kubelet.service: Consumed 1.809s CPU time.
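
The CrashLoopBackOff messages above show kubelet's restart back-off doubling (back-off 10s, then 20s, then 40s); by default it keeps doubling per failed restart up to a five-minute cap. A short sketch of that schedule (the cap value is kubelet's documented default; the loop itself is illustrative, not kubelet code):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		backoff := 10 * time.Second        // initial delay, as in "back-off 10s" above
		const maxBackoff = 5 * time.Minute // kubelet's default cap
		for attempt := 1; attempt <= 6; attempt++ {
			fmt.Printf("failed restart %d: next back-off %v\n", attempt, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}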
	
	
	==> kubernetes-dashboard [ba17aac76ebe9be83a75950a4ef6b7b6315fa93827de262d9593d4e97bbdf936] <==
	2025/10/27 22:39:51 Using namespace: kubernetes-dashboard
	2025/10/27 22:39:51 Using in-cluster config to connect to apiserver
	2025/10/27 22:39:51 Using secret token for csrf signing
	2025/10/27 22:39:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:39:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:39:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 22:39:51 Generating JWE encryption key
	2025/10/27 22:39:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:39:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:39:51 Initializing JWE encryption key from synchronized object
	2025/10/27 22:39:51 Creating in-cluster Sidecar client
	2025/10/27 22:39:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:39:51 Serving insecurely on HTTP port: 9090
	2025/10/27 22:40:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:39:51 Starting overwatch
	
	
	==> storage-provisioner [9447a459809287a6975180679ff62303ef8cb2896cb8860c8aeb82b1b5e8bc3c] <==
	I1027 22:39:43.896703       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:39:43.898923       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a243b7b9c5b092855cce3460a422aa4d749dfa23b6f0409b753011588c082a05] <==
	W1027 22:40:16.241010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:16.247361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 22:40:16.339393       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-829976_197dc557-f0f6-4608-99c9-5a723663949b!
	W1027 22:40:18.250824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:18.255883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:20.260622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:20.266570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:22.270362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:22.275501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:24.278709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:24.284124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:26.288008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:26.292155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:28.295929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:28.300836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:30.304715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:30.308900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:32.311967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:32.315821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:34.318716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:34.323141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:36.326483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:36.330293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:38.333643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:38.341008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
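
The repeated warnings above come from the provisioner polling the deprecated v1 Endpoints API every couple of seconds. A minimal sketch of the replacement the warning suggests, listing discovery.k8s.io/v1 EndpointSlices by the standard service-name label, assuming in-cluster credentials (the namespace and service name are hypothetical):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumption: running in-cluster, like the provisioner pod.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		// EndpointSlices carry the standard service-name label, so the old
		// v1 Endpoints GET becomes a labeled LIST. "my-service" is hypothetical.
		slices, err := clientset.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=my-service"},
		)
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}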
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-829976 -n embed-certs-829976
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-829976 -n embed-certs-829976: exit status 2 (389.569745ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-829976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.72s)
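
The kubectl field-selector check above (every pod, in every namespace, whose phase is not Running) has a direct client-go equivalent; a minimal sketch, assuming a kubeconfig containing the embed-certs-829976 context:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the default kubeconfig path holds the test context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		// Same query as the kubectl call above.
		pods, err := clientset.CoreV1().Pods("").List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}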

TestStartStop/group/newest-cni/serial/Pause (6.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-290425 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-290425 --alsologtostderr -v=1: exit status 80 (2.379539993s)

-- stdout --
	* Pausing node newest-cni-290425 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 22:40:35.716359  759457 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:40:35.716653  759457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:35.716664  759457 out.go:374] Setting ErrFile to fd 2...
	I1027 22:40:35.716669  759457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:35.716909  759457 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:40:35.717231  759457 out.go:368] Setting JSON to false
	I1027 22:40:35.717283  759457 mustload.go:66] Loading cluster: newest-cni-290425
	I1027 22:40:35.717766  759457 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:35.718407  759457 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:35.740595  759457 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:35.740975  759457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:35.813051  759457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 22:40:35.80128249 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:35.814055  759457 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-290425 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 22:40:35.815993  759457 out.go:179] * Pausing node newest-cni-290425 ... 
	I1027 22:40:35.817223  759457 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:35.817568  759457 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:35.817612  759457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:35.836257  759457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:35.943068  759457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:40:35.957834  759457 pause.go:52] kubelet running: true
	I1027 22:40:35.957899  759457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:40:36.121178  759457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:40:36.121266  759457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:40:36.215192  759457 cri.go:89] found id: "3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465"
	I1027 22:40:36.215214  759457 cri.go:89] found id: "8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a"
	I1027 22:40:36.215220  759457 cri.go:89] found id: "5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727"
	I1027 22:40:36.215225  759457 cri.go:89] found id: "e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295"
	I1027 22:40:36.215228  759457 cri.go:89] found id: "bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f"
	I1027 22:40:36.215233  759457 cri.go:89] found id: "54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d"
	I1027 22:40:36.215237  759457 cri.go:89] found id: ""
	I1027 22:40:36.215288  759457 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:40:36.230403  759457 retry.go:31] will retry after 276.651489ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:36Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:40:36.507774  759457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:40:36.521499  759457 pause.go:52] kubelet running: false
	I1027 22:40:36.521554  759457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:40:36.674258  759457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:40:36.674346  759457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:40:36.756935  759457 cri.go:89] found id: "3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465"
	I1027 22:40:36.756970  759457 cri.go:89] found id: "8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a"
	I1027 22:40:36.756977  759457 cri.go:89] found id: "5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727"
	I1027 22:40:36.756984  759457 cri.go:89] found id: "e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295"
	I1027 22:40:36.756987  759457 cri.go:89] found id: "bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f"
	I1027 22:40:36.756990  759457 cri.go:89] found id: "54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d"
	I1027 22:40:36.756992  759457 cri.go:89] found id: ""
	I1027 22:40:36.757027  759457 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:40:36.771513  759457 retry.go:31] will retry after 343.730828ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:36Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:40:37.116106  759457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:40:37.130111  759457 pause.go:52] kubelet running: false
	I1027 22:40:37.130177  759457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:40:37.284314  759457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:40:37.284408  759457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:40:37.375425  759457 cri.go:89] found id: "3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465"
	I1027 22:40:37.375465  759457 cri.go:89] found id: "8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a"
	I1027 22:40:37.375483  759457 cri.go:89] found id: "5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727"
	I1027 22:40:37.375488  759457 cri.go:89] found id: "e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295"
	I1027 22:40:37.375492  759457 cri.go:89] found id: "bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f"
	I1027 22:40:37.375497  759457 cri.go:89] found id: "54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d"
	I1027 22:40:37.375501  759457 cri.go:89] found id: ""
	I1027 22:40:37.375554  759457 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:40:37.391190  759457 retry.go:31] will retry after 367.191074ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:37Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:40:37.758697  759457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:40:37.772403  759457 pause.go:52] kubelet running: false
	I1027 22:40:37.772459  759457 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:40:37.915901  759457 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:40:37.916039  759457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:40:37.994751  759457 cri.go:89] found id: "3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465"
	I1027 22:40:37.994777  759457 cri.go:89] found id: "8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a"
	I1027 22:40:37.994783  759457 cri.go:89] found id: "5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727"
	I1027 22:40:37.994787  759457 cri.go:89] found id: "e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295"
	I1027 22:40:37.994792  759457 cri.go:89] found id: "bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f"
	I1027 22:40:37.994796  759457 cri.go:89] found id: "54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d"
	I1027 22:40:37.994801  759457 cri.go:89] found id: ""
	I1027 22:40:37.994846  759457 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:40:38.009599  759457 out.go:203] 
	W1027 22:40:38.011127  759457 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:40:38.011151  759457 out.go:285] * 
	* 
	W1027 22:40:38.015694  759457 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:40:38.017198  759457 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-290425 --alsologtostderr -v=1 failed: exit status 80
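
The exit status 80 above traces back to `sudo runc list -f json` failing with "open /run/runc: no such file or directory": /run/runc is runc's default state root, so pause could never enumerate running containers, and each retry (the 276ms, 343ms, and 367ms waits above) hit the same error until minikube gave up with GUEST_PAUSE. A minimal sketch of that jittered-retry shape, assuming nothing about minikube's internals beyond what the log shows (names and constants are illustrative):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retry re-runs op with growing, jittered waits, mirroring the
	// "will retry after ..." lines in the log above.
	func retry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			wait := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			base = base * 3 / 2
		}
		return err
	}

	func main() {
		err := retry(4, 250*time.Millisecond, func() error {
			// The operation that kept failing above.
			return exec.Command("sudo", "runc", "list", "-f", "json").Run()
		})
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}
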
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-290425
helpers_test.go:243: (dbg) docker inspect newest-cni-290425:

-- stdout --
	[
	    {
	        "Id": "56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c",
	        "Created": "2025-10-27T22:39:37.68348506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 757058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:40:24.710539295Z",
	            "FinishedAt": "2025-10-27T22:40:23.132901225Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/hostname",
	        "HostsPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/hosts",
	        "LogPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c-json.log",
	        "Name": "/newest-cni-290425",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-290425:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-290425",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c",
	                "LowerDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-290425",
	                "Source": "/var/lib/docker/volumes/newest-cni-290425/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-290425",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-290425",
	                "name.minikube.sigs.k8s.io": "newest-cni-290425",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3d04afb879a9bb2efb0328ea75d4db987654b1ef2f4a14765e315562ca19b797",
	            "SandboxKey": "/var/run/docker/netns/3d04afb879a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-290425": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:68:d9:d0:79:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "882fc6de2a096110b95ca3e32de921ddc1344df620994b742636f3034ae19fad",
	                    "EndpointID": "7cb628f5a72003d419d9837f248be14de0773ab0405f730e3157ac9ac19b883c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-290425",
	                        "56a3b8496171"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
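The inspect dump above is what the harness reads to reach the container: NetworkSettings.Ports maps each exposed container port to a port on 127.0.0.1 (22/tcp to 33103, 8443/tcp to 33106 in this run). As a sketch, the same lookup can be reproduced by hand with the stock docker inspect Go-template syntax, the same template the provisioner invokes later in this log:

	docker container inspect newest-cni-290425 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'    # 33103 in this run
	docker container inspect newest-cni-290425 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'  # 33106 in this run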
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-290425 -n newest-cni-290425
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-290425 -n newest-cni-290425: exit status 2 (373.27987ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-290425 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-290425 logs -n 25: (1.069580978s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p kubernetes-upgrade-695499                                                                                                                                                                                                                  │ kubernetes-upgrade-695499    │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p no-preload-188814                                                                                                                                                                                                                          │ no-preload-188814            │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:39 UTC │
	│ start   │ -p auto-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-927034 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-290425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ stop    │ -p newest-cni-290425 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927034 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 pgrep -a kubelet                                                                                                                                                                                                               │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable dashboard -p newest-cni-290425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ image   │ embed-certs-829976 image list --format=json                                                                                                                                                                                                   │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ pause   │ -p embed-certs-829976 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ image   │ newest-cni-290425 image list --format=json                                                                                                                                                                                                    │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ pause   │ -p newest-cni-290425 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/hosts                                                                                                                                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/resolv.conf                                                                                                                                                                                                      │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo crictl pods                                                                                                                                                                                                               │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo crictl ps --all                                                                                                                                                                                                           │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:40:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:40:24.438209  756848 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:40:24.438329  756848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:24.438340  756848 out.go:374] Setting ErrFile to fd 2...
	I1027 22:40:24.438345  756848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:24.438673  756848 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:40:24.439297  756848 out.go:368] Setting JSON to false
	I1027 22:40:24.440841  756848 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8563,"bootTime":1761596261,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:40:24.440961  756848 start.go:143] virtualization: kvm guest
	I1027 22:40:24.442921  756848 out.go:179] * [newest-cni-290425] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:40:24.445592  756848 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:40:24.445629  756848 notify.go:221] Checking for updates...
	I1027 22:40:24.448124  756848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:40:24.449565  756848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:24.451090  756848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:40:24.452160  756848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:40:24.456462  756848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:40:24.458338  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:24.459094  756848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:40:24.488803  756848 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:40:24.488892  756848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:24.557828  756848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:40:24.546418647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:24.557998  756848 docker.go:318] overlay module found
	I1027 22:40:24.559462  756848 out.go:179] * Using the docker driver based on existing profile
	I1027 22:40:24.560558  756848 start.go:307] selected driver: docker
	I1027 22:40:24.560578  756848 start.go:928] validating driver "docker" against &{Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:24.560718  756848 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:40:24.561602  756848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:24.632177  756848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:40:24.620016626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:24.632569  756848 start_flags.go:1010] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:24.632600  756848 cni.go:84] Creating CNI manager for ""
	I1027 22:40:24.632673  756848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:24.632732  756848 start.go:351] cluster config:
	{Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:24.634382  756848 out.go:179] * Starting "newest-cni-290425" primary control-plane node in "newest-cni-290425" cluster
	I1027 22:40:24.635369  756848 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:40:24.636382  756848 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:40:24.637272  756848 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:24.637317  756848 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:40:24.637329  756848 cache.go:59] Caching tarball of preloaded images
	I1027 22:40:24.637336  756848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:40:24.637435  756848 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:40:24.637450  756848 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:40:24.637576  756848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:40:24.659489  756848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:40:24.659511  756848 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:40:24.659527  756848 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:40:24.659550  756848 start.go:360] acquireMachinesLock for newest-cni-290425: {Name:mk4e0aa51aaa1a604f2ac1e14d4e9ad4994a6e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:40:24.659621  756848 start.go:364] duration metric: took 41.13µs to acquireMachinesLock for "newest-cni-290425"
	I1027 22:40:24.659640  756848 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:40:24.659645  756848 fix.go:55] fixHost starting: 
	I1027 22:40:24.659871  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:24.679073  756848 fix.go:113] recreateIfNeeded on newest-cni-290425: state=Stopped err=<nil>
	W1027 22:40:24.679130  756848 fix.go:139] unexpected machine state, will restart: <nil>
	W1027 22:40:24.188623  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:26.687852  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:24.681022  756848 out.go:252] * Restarting existing docker container for "newest-cni-290425" ...
	I1027 22:40:24.681102  756848 cli_runner.go:164] Run: docker start newest-cni-290425
	I1027 22:40:24.992255  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:25.014514  756848 kic.go:430] container "newest-cni-290425" state is running.
	I1027 22:40:25.015046  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:25.038668  756848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:40:25.038987  756848 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:25.039099  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:25.061826  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:25.062260  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:25.062285  756848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:25.063188  756848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45146->127.0.0.1:33103: read: connection reset by peer
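	# Note (not from this run's output): the handshake failure above is expected right
	# after `docker start`, since sshd inside the kic container is still coming up; the
	# provisioner retries, and the hostname command succeeds about three seconds later
	# on the line below.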
	I1027 22:40:28.204458  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:40:28.204492  756848 ubuntu.go:182] provisioning hostname "newest-cni-290425"
	I1027 22:40:28.204559  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.222514  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.222737  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.222759  756848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-290425 && echo "newest-cni-290425" | sudo tee /etc/hostname
	I1027 22:40:28.375236  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:40:28.375318  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.392770  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.393063  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.393082  756848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-290425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-290425/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-290425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:40:28.533683  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:40:28.533712  756848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:40:28.533740  756848 ubuntu.go:190] setting up certificates
	I1027 22:40:28.533756  756848 provision.go:84] configureAuth start
	I1027 22:40:28.533832  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:28.551092  756848 provision.go:143] copyHostCerts
	I1027 22:40:28.551157  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:40:28.551183  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:40:28.551262  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:40:28.551424  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:40:28.551439  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:40:28.551489  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:40:28.551578  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:40:28.551589  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:40:28.551627  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:40:28.551720  756848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-290425 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-290425]
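	# Sketch (not from this run's output): the server cert generated above embeds the
	# SANs from the san=[...] list in the preceding line. Assuming OpenSSL 1.1.1+ on the
	# host, they can be checked against that list with:
	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem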
	I1027 22:40:28.786512  756848 provision.go:177] copyRemoteCerts
	I1027 22:40:28.786589  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:40:28.786645  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.804351  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:28.905399  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:40:28.923296  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:40:28.940336  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:40:28.958758  756848 provision.go:87] duration metric: took 424.98667ms to configureAuth
	I1027 22:40:28.958786  756848 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:40:28.959034  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:28.959153  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.977021  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.977337  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.977362  756848 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:40:29.254601  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:40:29.254630  756848 machine.go:97] duration metric: took 4.215620835s to provisionDockerMachine
	I1027 22:40:29.254645  756848 start.go:293] postStartSetup for "newest-cni-290425" (driver="docker")
	I1027 22:40:29.254658  756848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:40:29.254744  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:40:29.254799  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.272656  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.373656  756848 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:40:29.377346  756848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:40:29.377381  756848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:40:29.377394  756848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:40:29.377439  756848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:40:29.377507  756848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:40:29.377598  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:40:29.385749  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:29.403339  756848 start.go:296] duration metric: took 148.678819ms for postStartSetup
	I1027 22:40:29.403416  756848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:40:29.403473  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.421865  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.520183  756848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:40:29.524936  756848 fix.go:57] duration metric: took 4.865280599s for fixHost
	I1027 22:40:29.524989  756848 start.go:83] releasing machines lock for "newest-cni-290425", held for 4.865355811s
	I1027 22:40:29.525055  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:29.542221  756848 ssh_runner.go:195] Run: cat /version.json
	I1027 22:40:29.542269  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.542325  756848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:40:29.542380  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.560078  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.560376  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.658503  756848 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:29.714758  756848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:40:29.751819  756848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:40:29.757527  756848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:40:29.757592  756848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:40:29.766082  756848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:40:29.766107  756848 start.go:496] detecting cgroup driver to use...
	I1027 22:40:29.766144  756848 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:40:29.766201  756848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:40:29.782220  756848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:40:29.795704  756848 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:40:29.795756  756848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:40:29.811814  756848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:40:29.824770  756848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:40:29.911398  756848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:40:30.002621  756848 docker.go:234] disabling docker service ...
	I1027 22:40:30.002705  756848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:40:30.018425  756848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:40:30.032066  756848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:40:30.126259  756848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:40:30.224136  756848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:40:30.240695  756848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:40:30.262231  756848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:40:30.262309  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.272017  756848 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:40:30.272077  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.281097  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.290459  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.299765  756848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:40:30.308783  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.318037  756848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.326660  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.335545  756848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:40:30.343816  756848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:40:30.351923  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:30.438807  756848 ssh_runner.go:195] Run: sudo systemctl restart crio
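	# Sketch (not from this run's output): the sed edits above pin the pause image and
	# the systemd cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the restart.
	# Inside the node, the effective values can be confirmed with:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd", conmon_cgroup = "pod"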
	I1027 22:40:30.541588  756848 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:40:30.541647  756848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:40:30.545709  756848 start.go:564] Will wait 60s for crictl version
	I1027 22:40:30.545763  756848 ssh_runner.go:195] Run: which crictl
	I1027 22:40:30.549390  756848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:40:30.574840  756848 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:40:30.574912  756848 ssh_runner.go:195] Run: crio --version
	I1027 22:40:30.603907  756848 ssh_runner.go:195] Run: crio --version
	I1027 22:40:30.635251  756848 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:40:30.636309  756848 cli_runner.go:164] Run: docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:30.652517  756848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:40:30.656856  756848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
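	# Note (not from this run's output): the one-liner above (and the matching
	# control-plane.minikube.internal update later in this log) follows one pattern:
	# filter any stale entry out of /etc/hosts, append the fresh name-to-IP pair to a
	# temp file, then `sudo cp` it back so the privileged write does not depend on
	# shell redirection running as root.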
	I1027 22:40:30.668683  756848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1027 22:40:30.669554  756848 kubeadm.go:884] updating cluster {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:40:30.669731  756848 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:30.669822  756848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:30.704544  756848 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:30.704566  756848 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:40:30.704611  756848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:30.734075  756848 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:30.734098  756848 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:40:30.734106  756848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 22:40:30.734202  756848 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-290425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
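	# Sketch (not from this run's output): the unit above is installed as the systemd
	# drop-in 10-kubeadm.conf later in this log; the empty ExecStart= line resets the
	# stock command before the minikube one is set. The merged unit systemd will
	# actually run can be reviewed inside the node with:
	systemctl cat kubelet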
	I1027 22:40:30.734273  756848 ssh_runner.go:195] Run: crio config
	I1027 22:40:30.780046  756848 cni.go:84] Creating CNI manager for ""
	I1027 22:40:30.780067  756848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:30.780090  756848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 22:40:30.780113  756848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-290425 NodeName:newest-cni-290425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:40:30.780240  756848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-290425"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
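
Editor's note: the dump above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that kubeadm consumes in one pass. A minimal Go sketch like the following can enumerate the documents in the rendered file before it is handed to kubeadm; the file path is the one staged a few lines below, and gopkg.in/yaml.v3 is an assumed dependency:

```go
// Sketch: list the apiVersion/kind of each document in the generated
// kubeadm config stream. Assumes gopkg.in/yaml.v3 is available.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from this log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```
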
	
	I1027 22:40:30.780304  756848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:40:30.788709  756848 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:40:30.788776  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:40:30.796691  756848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:40:30.809324  756848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:40:30.821977  756848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1027 22:40:30.834850  756848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:40:30.838629  756848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
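
Editor's note: the one-liner above strips any existing control-plane.minikube.internal mapping and re-appends it with the current IP, staging the result through a temp file before copying it back over /etc/hosts. A rough stdlib equivalent (minus the temp-file step; writing /etc/hosts requires root) might look like:

```go
// Sketch: idempotent control-plane host entry, mirroring the shell
// grep -v / echo pipeline above.
package main

import (
	"log"
	"os"
	"strings"
)

// pinHost drops any existing line that maps host and appends "ip\thost".
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping, whatever IP it pointed at
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```
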
	I1027 22:40:30.848598  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:30.930756  756848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:30.960505  756848 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425 for IP: 192.168.76.2
	I1027 22:40:30.960526  756848 certs.go:195] generating shared ca certs ...
	I1027 22:40:30.960549  756848 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:30.960716  756848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:40:30.960760  756848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:40:30.960770  756848 certs.go:257] generating profile certs ...
	I1027 22:40:30.960854  756848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key
	I1027 22:40:30.960928  756848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67
	I1027 22:40:30.961028  756848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key
	I1027 22:40:30.961171  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:40:30.961204  756848 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:40:30.961217  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:40:30.961254  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:40:30.961289  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:40:30.961318  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:40:30.961382  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:30.962311  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:40:30.982191  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:40:31.003485  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:40:31.024750  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:40:31.051339  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:40:31.070810  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:40:31.089035  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:40:31.107252  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:40:31.124793  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:40:31.142653  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:40:31.162599  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:40:31.180139  756848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:40:31.194578  756848 ssh_runner.go:195] Run: openssl version
	I1027 22:40:31.200775  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:40:31.210145  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.214047  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.214105  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.252428  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:40:31.261073  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:40:31.270127  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.274120  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.274183  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.309111  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:40:31.317698  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:40:31.326420  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.330243  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.330307  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.365724  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
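
Editor's note: the three blocks above install each PEM into the system trust store: copy it under /usr/share/ca-certificates, derive the OpenSSL subject hash, and point an /etc/ssl/certs/<hash>.0 symlink at it. A sketch of the same sequence, shelling out to openssl just as the log does (root and an openssl binary on PATH are assumed):

```go
// Sketch: install a CA certificate the way the log does it: compute the
// OpenSSL subject hash and symlink /etc/ssl/certs/<hash>.0 at the PEM.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs equivalent: replace a stale symlink
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```
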
	I1027 22:40:31.374331  756848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:40:31.378340  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:40:31.413065  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:40:31.448812  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:40:31.492414  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:40:31.536913  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:40:31.581567  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
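
Editor's note: each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. The same check in pure Go, against two of the paths probed above:

```go
// Sketch: Go-native equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func checkEnd(path string, within time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(within).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
	}
	return nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		if err := checkEnd(p, 24*time.Hour); err != nil {
			log.Fatal(err)
		}
	}
}
```
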
	I1027 22:40:31.637412  756848 kubeadm.go:401] StartCluster: {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:31.637550  756848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:40:31.637610  756848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:40:31.673955  756848 cri.go:89] found id: "5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727"
	I1027 22:40:31.673983  756848 cri.go:89] found id: "e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295"
	I1027 22:40:31.673988  756848 cri.go:89] found id: "bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f"
	I1027 22:40:31.673993  756848 cri.go:89] found id: "54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d"
	I1027 22:40:31.673996  756848 cri.go:89] found id: ""
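
Editor's note: the empty trailing `found id` entry just marks the end of the ID list parsed from crictl's --quiet output. A small sketch of that listing step, using the same crictl invocation as the Run line above (sudo and crictl on PATH assumed):

```go
// Sketch: collect kube-system container IDs via crictl's namespace label
// filter, as the cri helper above does.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; Fields drops blanks.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
```
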
	I1027 22:40:31.674047  756848 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:40:31.687812  756848 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:31Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:40:31.687887  756848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:40:31.697214  756848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:40:31.697231  756848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:40:31.697274  756848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:40:31.705188  756848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:40:31.706218  756848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-290425" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:31.706815  756848 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-290425" cluster setting kubeconfig missing "newest-cni-290425" context setting]
	I1027 22:40:31.708077  756848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
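
Editor's note: the repair above re-adds the missing cluster and context stanzas under a file lock before rewriting the kubeconfig. A minimal sketch of the same repair using client-go's clientcmd package (no locking; the profile name and server address are copied from this log):

```go
// Sketch: add missing cluster/context entries to an existing kubeconfig.
// Assumes k8s.io/client-go as a dependency; error handling abbreviated.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21790-482142/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	name := "newest-cni-290425"
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.76.2:8443"}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
```
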
	I1027 22:40:31.710194  756848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:40:31.719725  756848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 22:40:31.719756  756848 kubeadm.go:602] duration metric: took 22.519377ms to restartPrimaryControlPlane
	I1027 22:40:31.719767  756848 kubeadm.go:403] duration metric: took 82.367104ms to StartCluster
	I1027 22:40:31.719783  756848 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.719848  756848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:31.722417  756848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.722691  756848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:40:31.722773  756848 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:40:31.722874  756848 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-290425"
	I1027 22:40:31.722893  756848 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-290425"
	W1027 22:40:31.722902  756848 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:40:31.722931  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.722937  756848 addons.go:69] Setting dashboard=true in profile "newest-cni-290425"
	I1027 22:40:31.722973  756848 addons.go:238] Setting addon dashboard=true in "newest-cni-290425"
	W1027 22:40:31.722982  756848 addons.go:247] addon dashboard should already be in state true
	I1027 22:40:31.722987  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:31.723014  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.723047  756848 addons.go:69] Setting default-storageclass=true in profile "newest-cni-290425"
	I1027 22:40:31.723064  756848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-290425"
	I1027 22:40:31.723353  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.723550  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.723800  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.725730  756848 out.go:179] * Verifying Kubernetes components...
	I1027 22:40:31.726934  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:31.749813  756848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:40:31.749929  756848 addons.go:238] Setting addon default-storageclass=true in "newest-cni-290425"
	W1027 22:40:31.749966  756848 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:40:31.750012  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.750560  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.750761  756848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 22:40:31.750784  756848 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:31.750805  756848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:40:31.750863  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.756414  756848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1027 22:40:29.188109  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:31.188378  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:31.757286  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 22:40:31.757307  756848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 22:40:31.757368  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.788482  756848 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:31.788523  756848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:40:31.788585  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.788473  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.791269  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.812300  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.876427  756848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:31.890087  756848 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:40:31.890171  756848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:40:31.904610  756848 api_server.go:72] duration metric: took 181.883596ms to wait for apiserver process to appear ...
	I1027 22:40:31.904641  756848 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:40:31.904675  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:31.911250  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:31.913745  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 22:40:31.913771  756848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 22:40:31.928922  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 22:40:31.928985  756848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 22:40:31.937602  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:31.944700  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 22:40:31.944729  756848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 22:40:31.965934  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 22:40:31.965991  756848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 22:40:31.983504  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 22:40:31.983534  756848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 22:40:32.000875  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 22:40:32.000897  756848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 22:40:32.015058  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 22:40:32.015175  756848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 22:40:32.028828  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 22:40:32.028864  756848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 22:40:32.042288  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:40:32.042313  756848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 22:40:32.055615  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
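
Editor's note: all ten dashboard manifests are applied in a single kubectl invocation rather than one apply per file. A sketch of how that command line can be assembled and run (the manifest list is abbreviated; `sudo VAR=... cmd` relies on sudo accepting leading environment assignments):

```go
// Sketch: build one `sudo KUBECONFIG=... kubectl apply -f ... -f ...`
// invocation over a list of staged manifests.
package main

import (
	"log"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml", // remaining manifests elided
	}
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply: %v\n%s", err, out)
	}
}
```
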
	I1027 22:40:33.250603  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:40:33.250644  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:40:33.250661  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.259803  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:40:33.259841  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:40:33.405243  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.410997  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:33.411027  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:33.838488  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.927200802s)
	I1027 22:40:33.838547  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.900910209s)
	I1027 22:40:33.838682  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.783022367s)
	I1027 22:40:33.840157  756848 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-290425 addons enable metrics-server
	
	I1027 22:40:33.849803  756848 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 22:40:33.851036  756848 addons.go:514] duration metric: took 2.128273879s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 22:40:33.905759  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.909856  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:33.909880  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:34.405178  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:34.409922  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:34.409969  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:34.905382  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:34.910198  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 22:40:34.911208  756848 api_server.go:141] control plane version: v1.34.1
	I1027 22:40:34.911251  756848 api_server.go:131] duration metric: took 3.006601962s to wait for apiserver health ...
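
Editor's note: the poll above tolerates the early 403 responses (anonymous user, RBAC bootstrap roles not yet installed) and the 500s (post-start hooks still failing) until /healthz finally returns 200. A self-contained Go poller along the same lines, skipping TLS verification since the probe runs before any client trust is configured:

```go
// Sketch: poll the apiserver /healthz endpoint until it returns 200 or a
// deadline passes, logging the transient failure bodies as the log does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			log.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval seen above
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		log.Fatal(err)
	}
}
```
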
	I1027 22:40:34.911260  756848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:40:34.915094  756848 system_pods.go:59] 8 kube-system pods found
	I1027 22:40:34.915146  756848 system_pods.go:61] "coredns-66bc5c9577-hmtz5" [d0253fb1-e66b-448e-8b6d-e9882120ffd2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:34.915160  756848 system_pods.go:61] "etcd-newest-cni-290425" [fa08a886-4040-46e0-9e58-975345432c48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:40:34.915178  756848 system_pods.go:61] "kindnet-pk58m" [12e1d8a7-de11-4047-85f7-4832c3a7e80c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 22:40:34.915190  756848 system_pods.go:61] "kube-apiserver-newest-cni-290425" [36218ab8-7cc4-4487-9dcd-5186adc9d4c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:40:34.915203  756848 system_pods.go:61] "kube-controller-manager-newest-cni-290425" [494bc2f7-8ec5-40bb-bd19-0c4a96b93532] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:40:34.915217  756848 system_pods.go:61] "kube-proxy-d866g" [ba6a46e3-367b-40d2-a919-35b062379af3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 22:40:34.915235  756848 system_pods.go:61] "kube-scheduler-newest-cni-290425" [69cd3450-9c48-455d-9bc0-b8f45eeb37c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:40:34.915246  756848 system_pods.go:61] "storage-provisioner" [d8b271bc-46b6-4d99-a6a2-27907f5afc55] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:34.915256  756848 system_pods.go:74] duration metric: took 3.987353ms to wait for pod list to return data ...
	I1027 22:40:34.915270  756848 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:40:34.917715  756848 default_sa.go:45] found service account: "default"
	I1027 22:40:34.917735  756848 default_sa.go:55] duration metric: took 2.459034ms for default service account to be created ...
	I1027 22:40:34.917746  756848 kubeadm.go:587] duration metric: took 3.195028043s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:34.917762  756848 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:40:34.920111  756848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:40:34.920150  756848 node_conditions.go:123] node cpu capacity is 8
	I1027 22:40:34.920168  756848 node_conditions.go:105] duration metric: took 2.398457ms to run NodePressure ...
	I1027 22:40:34.920187  756848 start.go:242] waiting for startup goroutines ...
	I1027 22:40:34.920198  756848 start.go:247] waiting for cluster config update ...
	I1027 22:40:34.920210  756848 start.go:256] writing updated cluster config ...
	I1027 22:40:34.920542  756848 ssh_runner.go:195] Run: rm -f paused
	I1027 22:40:34.975966  756848 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:40:34.978357  756848 out.go:179] * Done! kubectl is now configured to use "newest-cni-290425" cluster and "default" namespace by default
	W1027 22:40:33.190344  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:35.689191  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.339027833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.342465456Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0045808a-209d-48de-9d02-c79cadfd47ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.343323802Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6ae77e2d-e256-4547-baae-08e19974503a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.344234864Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.344911841Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.345165423Z" level=info msg="Ran pod sandbox c2c10d450df76393c0730687102cb037fc0e2d2bd9d5532a2b1c287608ed32e8 with infra container: kube-system/kindnet-pk58m/POD" id=0045808a-209d-48de-9d02-c79cadfd47ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.345770034Z" level=info msg="Ran pod sandbox 12bc48478c348d3a77b92b0aef3fd96ec09b6e164d55df6ccf8572b37f7471cd with infra container: kube-system/kube-proxy-d866g/POD" id=6ae77e2d-e256-4547-baae-08e19974503a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.34653089Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=80e11120-8804-42c7-9f9d-b2b8515bfd11 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.347185404Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e2e34137-c394-4d5b-8bd0-2e3a3d3b3e7b name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.347498758Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=830ce0c1-26cc-4131-8732-c4d0361bfce9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.348223106Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8d445468-3c45-4d3e-b49e-25b5d2d77e19 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.34864001Z" level=info msg="Creating container: kube-system/kindnet-pk58m/kindnet-cni" id=99f97772-a879-4496-95d5-cb4c65041246 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.34873663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.349167313Z" level=info msg="Creating container: kube-system/kube-proxy-d866g/kube-proxy" id=131f57b6-e665-4dae-a281-e59b6c39840a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.349293766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.353623342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.35427441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.356397288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.357023895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.384966837Z" level=info msg="Created container 8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a: kube-system/kindnet-pk58m/kindnet-cni" id=99f97772-a879-4496-95d5-cb4c65041246 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.385651103Z" level=info msg="Starting container: 8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a" id=78d2ed1c-dfcb-43a8-b10e-abecac821178 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.387724216Z" level=info msg="Started container" PID=1042 containerID=8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a description=kube-system/kindnet-pk58m/kindnet-cni id=78d2ed1c-dfcb-43a8-b10e-abecac821178 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2c10d450df76393c0730687102cb037fc0e2d2bd9d5532a2b1c287608ed32e8
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.388843122Z" level=info msg="Created container 3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465: kube-system/kube-proxy-d866g/kube-proxy" id=131f57b6-e665-4dae-a281-e59b6c39840a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.389399869Z" level=info msg="Starting container: 3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465" id=f2a82451-eac2-4896-b43b-9c2818ce21aa name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.392577894Z" level=info msg="Started container" PID=1043 containerID=3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465 description=kube-system/kube-proxy-d866g/kube-proxy id=f2a82451-eac2-4896-b43b-9c2818ce21aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=12bc48478c348d3a77b92b0aef3fd96ec09b6e164d55df6ccf8572b37f7471cd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3b441743d670c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   12bc48478c348       kube-proxy-d866g                            kube-system
	8e02b64166215       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   c2c10d450df76       kindnet-pk58m                               kube-system
	5c6f16a2765ac       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   17be29980ea17       kube-apiserver-newest-cni-290425            kube-system
	e2e676795ba20       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   6d1da6c6c5020       etcd-newest-cni-290425                      kube-system
	bcada78a58b8a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   1f4026dd5b015       kube-controller-manager-newest-cni-290425   kube-system
	54cf126c5f012       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   06a38e8593c07       kube-scheduler-newest-cni-290425            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-290425
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-290425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=newest-cni-290425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_39_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:39:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-290425
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:40:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:40:33 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:40:33 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:40:33 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 22:40:33 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-290425
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e391c1d9-7d95-420d-8069-436e90adb7af
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-290425                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-pk58m                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-apiserver-newest-cni-290425             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-newest-cni-290425    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-d866g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-newest-cni-290425             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 36s              kube-proxy       
	  Normal  Starting                 4s               kube-proxy       
	  Normal  Starting                 43s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s              kubelet          Node newest-cni-290425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s              kubelet          Node newest-cni-290425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s              kubelet          Node newest-cni-290425 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s              node-controller  Node newest-cni-290425 event: Registered Node newest-cni-290425 in Controller
	  Normal  Starting                 8s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet          Node newest-cni-290425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet          Node newest-cni-290425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x8 over 8s)  kubelet          Node newest-cni-290425 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s               node-controller  Node newest-cni-290425 event: Registered Node newest-cni-290425 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295] <==
	{"level":"warn","ts":"2025-10-27T22:40:32.636442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.642143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.648125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.662247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.668119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.674534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.681035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.688392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.694402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.701215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.709688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.716269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.721991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.728346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.733923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.739614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.745733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.751440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.756926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.769564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.775276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.793874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.799813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.805979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.851128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56876","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:40:39 up  2:22,  0 user,  load average: 4.65, 3.30, 2.95
	Linux newest-cni-290425 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a] <==
	I1027 22:40:34.522981       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:40:34.523209       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 22:40:34.523320       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:40:34.523337       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:40:34.523360       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:40:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:40:34.725793       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:40:34.725858       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:40:34.725876       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:40:34.726352       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 22:40:34.727064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 22:40:34.726938       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 22:40:34.819811       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 22:40:34.820444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 22:40:36.327307       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:40:36.327333       1 metrics.go:72] Registering metrics
	I1027 22:40:36.327385       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727] <==
	I1027 22:40:33.334324       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:40:33.334391       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 22:40:33.335375       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:40:33.336124       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 22:40:33.336273       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 22:40:33.336294       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:40:33.337114       1 aggregator.go:171] initial CRD sync complete...
	I1027 22:40:33.337131       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:40:33.337139       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:40:33.337146       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:40:33.344229       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:40:33.359914       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:40:33.360803       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:40:33.641182       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:40:33.665718       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:40:33.682335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:40:33.689023       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:40:33.695094       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:40:33.724875       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.124.233"}
	I1027 22:40:33.733539       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.7.43"}
	I1027 22:40:34.238477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:40:36.946927       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:40:37.097337       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:40:37.247791       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f] <==
	I1027 22:40:36.693517       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:40:36.693535       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:40:36.693558       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:40:36.694729       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 22:40:36.695878       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 22:40:36.697571       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 22:40:36.703573       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:40:36.703585       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:40:36.703720       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:40:36.703586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:40:36.703816       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:40:36.703931       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:40:36.705062       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:40:36.703587       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:40:36.703576       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:40:36.704719       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:40:36.703602       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:40:36.706504       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:40:36.708979       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:40:36.712360       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:40:36.716733       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:40:36.725070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:40:36.730999       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:40:36.731038       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:40:36.731060       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465] <==
	I1027 22:40:34.432315       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:40:34.502773       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:40:34.603602       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:40:34.603639       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 22:40:34.603760       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:40:34.623223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:40:34.623273       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:40:34.628611       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:40:34.629036       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:40:34.629060       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:34.630322       1 config.go:200] "Starting service config controller"
	I1027 22:40:34.630351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:40:34.630451       1 config.go:309] "Starting node config controller"
	I1027 22:40:34.630465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:40:34.630445       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:40:34.630479       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:40:34.630491       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:40:34.630544       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:40:34.630558       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:40:34.730556       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:40:34.730719       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:40:34.730755       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d] <==
	I1027 22:40:32.216393       1 serving.go:386] Generated self-signed cert in-memory
	I1027 22:40:33.303792       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:40:33.303902       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:33.310430       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 22:40:33.310447       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:40:33.310465       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 22:40:33.310477       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:40:33.310526       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:40:33.310556       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:40:33.310835       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:40:33.310920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:40:33.411083       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:40:33.411121       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 22:40:33.411110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.078780     665 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-290425\" not found" node="newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.335528     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.345915     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-290425\" already exists" pod="kube-system/etcd-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.345990     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.353913     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-290425\" already exists" pod="kube-system/kube-apiserver-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.353965     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.356437     665 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.356541     665 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.356580     665 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.357934     665 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.359923     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-290425\" already exists" pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.360006     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.365399     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-290425\" already exists" pod="kube-system/kube-scheduler-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.621427     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.635353     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-290425\" already exists" pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.031154     665 apiserver.go:52] "Watching apiserver"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.134620     665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161675     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba6a46e3-367b-40d2-a919-35b062379af3-xtables-lock\") pod \"kube-proxy-d866g\" (UID: \"ba6a46e3-367b-40d2-a919-35b062379af3\") " pod="kube-system/kube-proxy-d866g"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161726     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-xtables-lock\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161750     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-lib-modules\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161768     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba6a46e3-367b-40d2-a919-35b062379af3-lib-modules\") pod \"kube-proxy-d866g\" (UID: \"ba6a46e3-367b-40d2-a919-35b062379af3\") " pod="kube-system/kube-proxy-d866g"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161780     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-cni-cfg\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:36 newest-cni-290425 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:40:36 newest-cni-290425 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:40:36 newest-cni-290425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
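The describe-node output at the top of this dump shows why the node was briefly NotReady after the restart: the kubelet reports NetworkReady=false because no CNI conflist had been written to /etc/cni/net.d/ yet, even though the kindnet pod was already running. To check this by hand, the same ssh probes this report already runs against auto-293335 (see the Audit table below) apply here as well; a sketch:

	out/minikube-linux-amd64 ssh -p newest-cni-290425 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;
	out/minikube-linux-amd64 ssh -p newest-cni-290425 sudo crictl pods

An empty /etc/cni/net.d/ here would confirm the race between kubelet startup and kindnet writing its conflist.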
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-290425 -n newest-cni-290425
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-290425 -n newest-cni-290425: exit status 2 (381.218734ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
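The harness pulls one field per status invocation ({{.APIServer}} above, {{.Host}} further down). A single call with a multi-field Go template takes the same snapshot in one shot; a sketch, assuming the field names of minikube's status struct (Host, Kubelet, APIServer, Kubeconfig, as printed by the plain status command):

	out/minikube-linux-amd64 status -p newest-cni-290425 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'

status still exits nonzero when any component is degraded, which is why the harness treats exit status 2 as "may be ok".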
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-290425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-hmtz5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5: exit status 1 (76.847537ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-hmtz5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-m6nlx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c62x5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5: exit status 1
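The NotFound errors are a namespace artifact, not evidence the pods are gone: the names were collected cluster-wide with -A, but describe ran without -n and so searched only the default namespace. Namespace-qualified retries would reach them; a sketch (coredns and storage-provisioner live in kube-system, the dashboard pods in the kubernetes-dashboard namespace created in the apiserver log above):

	kubectl --context newest-cni-290425 -n kube-system describe pod coredns-66bc5c9577-hmtz5 storage-provisioner
	kubectl --context newest-cni-290425 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5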
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-290425
helpers_test.go:243: (dbg) docker inspect newest-cni-290425:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c",
	        "Created": "2025-10-27T22:39:37.68348506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 757058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:40:24.710539295Z",
	            "FinishedAt": "2025-10-27T22:40:23.132901225Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/hostname",
	        "HostsPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/hosts",
	        "LogPath": "/var/lib/docker/containers/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c/56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c-json.log",
	        "Name": "/newest-cni-290425",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-290425:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-290425",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "56a3b8496171589fef12443927baf3216a74d11ebfce920877634a27eb5ea57c",
	                "LowerDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3b408faabec5b809c7accef84c5ea04428c02d26d8b17595defbd10e5d0bde7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-290425",
	                "Source": "/var/lib/docker/volumes/newest-cni-290425/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-290425",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-290425",
	                "name.minikube.sigs.k8s.io": "newest-cni-290425",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3d04afb879a9bb2efb0328ea75d4db987654b1ef2f4a14765e315562ca19b797",
	            "SandboxKey": "/var/run/docker/netns/3d04afb879a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-290425": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:68:d9:d0:79:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "882fc6de2a096110b95ca3e32de921ddc1344df620994b742636f3034ae19fad",
	                    "EndpointID": "7cb628f5a72003d419d9837f248be14de0773ab0405f730e3157ac9ac19b883c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-290425",
	                        "56a3b8496171"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
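Most of the inspect dump above is boilerplate for a pause post-mortem; the relevant fields can be pulled directly with docker inspect's Go templates. A sketch (assumes every exposed port has at least one binding, as in the dump above):

	# State.Paused is the docker-level pause flag; the dump shows running/false
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' newest-cni-290425
	# one "port -> hostport" line per published port
	docker inspect -f '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostPort}}{{"\n"}}{{end}}' newest-cni-290425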
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-290425 -n newest-cni-290425
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-290425 -n newest-cni-290425: exit status 2 (366.189087ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-290425 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-290425 logs -n 25: (1.096122863s)
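logs -n 25 limits each component to its last 25 lines, which is why the sections below start mid-stream. For offline analysis the full dump can be written to a file instead; a sketch, assuming the bundled v1.37.0 binary carries the upstream --file flag:

	out/minikube-linux-amd64 -p newest-cni-290425 logs --file=/tmp/newest-cni-290425-postmortem.log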
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-927034 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:39 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-290425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ stop    │ -p newest-cni-290425 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927034 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 pgrep -a kubelet                                                                                                                                                                                                               │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ addons  │ enable dashboard -p newest-cni-290425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ image   │ embed-certs-829976 image list --format=json                                                                                                                                                                                                   │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ pause   │ -p embed-certs-829976 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ image   │ newest-cni-290425 image list --format=json                                                                                                                                                                                                    │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ pause   │ -p newest-cni-290425 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/hosts                                                                                                                                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/resolv.conf                                                                                                                                                                                                      │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo crictl pods                                                                                                                                                                                                               │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo crictl ps --all                                                                                                                                                                                                           │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo ip a s                                                                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo ip r s                                                                                                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo iptables-save                                                                                                                                                                                                             │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo iptables -t nat -L -n -v                                                                                                                                                                                                  │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p embed-certs-829976                                                                                                                                                                                                                         │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo systemctl status kubelet --all --full --no-pager                                                                                                                                                                          │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl cat kubelet --no-pager                                                                                                                                                                                          │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:40:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:40:24.438209  756848 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:40:24.438329  756848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:24.438340  756848 out.go:374] Setting ErrFile to fd 2...
	I1027 22:40:24.438345  756848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:24.438673  756848 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:40:24.439297  756848 out.go:368] Setting JSON to false
	I1027 22:40:24.440841  756848 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8563,"bootTime":1761596261,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:40:24.440961  756848 start.go:143] virtualization: kvm guest
	I1027 22:40:24.442921  756848 out.go:179] * [newest-cni-290425] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:40:24.445592  756848 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:40:24.445629  756848 notify.go:221] Checking for updates...
	I1027 22:40:24.448124  756848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:40:24.449565  756848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:24.451090  756848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:40:24.452160  756848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:40:24.456462  756848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:40:24.458338  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:24.459094  756848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:40:24.488803  756848 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:40:24.488892  756848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:24.557828  756848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:40:24.546418647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:24.557998  756848 docker.go:318] overlay module found
	I1027 22:40:24.559462  756848 out.go:179] * Using the docker driver based on existing profile
	I1027 22:40:24.560558  756848 start.go:307] selected driver: docker
	I1027 22:40:24.560578  756848 start.go:928] validating driver "docker" against &{Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:24.560718  756848 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:40:24.561602  756848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:24.632177  756848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:40:24.620016626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:24.632569  756848 start_flags.go:1010] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:24.632600  756848 cni.go:84] Creating CNI manager for ""
	I1027 22:40:24.632673  756848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:24.632732  756848 start.go:351] cluster config:
	{Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:24.634382  756848 out.go:179] * Starting "newest-cni-290425" primary control-plane node in "newest-cni-290425" cluster
	I1027 22:40:24.635369  756848 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:40:24.636382  756848 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:40:24.637272  756848 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:24.637317  756848 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:40:24.637329  756848 cache.go:59] Caching tarball of preloaded images
	I1027 22:40:24.637336  756848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:40:24.637435  756848 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:40:24.637450  756848 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:40:24.637576  756848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:40:24.659489  756848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:40:24.659511  756848 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:40:24.659527  756848 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:40:24.659550  756848 start.go:360] acquireMachinesLock for newest-cni-290425: {Name:mk4e0aa51aaa1a604f2ac1e14d4e9ad4994a6e85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:40:24.659621  756848 start.go:364] duration metric: took 41.13µs to acquireMachinesLock for "newest-cni-290425"
	I1027 22:40:24.659640  756848 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:40:24.659645  756848 fix.go:55] fixHost starting: 
	I1027 22:40:24.659871  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:24.679073  756848 fix.go:113] recreateIfNeeded on newest-cni-290425: state=Stopped err=<nil>
	W1027 22:40:24.679130  756848 fix.go:139] unexpected machine state, will restart: <nil>
	W1027 22:40:24.188623  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:26.687852  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:24.681022  756848 out.go:252] * Restarting existing docker container for "newest-cni-290425" ...
	I1027 22:40:24.681102  756848 cli_runner.go:164] Run: docker start newest-cni-290425
	I1027 22:40:24.992255  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:25.014514  756848 kic.go:430] container "newest-cni-290425" state is running.
	I1027 22:40:25.015046  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:25.038668  756848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/config.json ...
	I1027 22:40:25.038987  756848 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:25.039099  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:25.061826  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:25.062260  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:25.062285  756848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:25.063188  756848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45146->127.0.0.1:33103: read: connection reset by peer
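	The "connection reset by peer" above is expected: libmachine dials the forwarded SSH port while sshd inside the freshly started container is still coming up, and simply retries until the handshake succeeds (here at 22:40:28). A minimal bash sketch of the same wait loop, not minikube's own code, using the forwarded port and key path from this log:
	  for i in $(seq 1 30); do
	    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 -p 33103 \
	      -i /home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa \
	      docker@127.0.0.1 true 2>/dev/null && break   # resets are normal while sshd boots
	    sleep 1
	  done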
	I1027 22:40:28.204458  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:40:28.204492  756848 ubuntu.go:182] provisioning hostname "newest-cni-290425"
	I1027 22:40:28.204559  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.222514  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.222737  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.222759  756848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-290425 && echo "newest-cni-290425" | sudo tee /etc/hostname
	I1027 22:40:28.375236  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-290425
	
	I1027 22:40:28.375318  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.392770  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.393063  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.393082  756848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-290425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-290425/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-290425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:40:28.533683  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
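	The script above is idempotent: the outer grep does nothing when a line already ends in the hostname, and the inner branches either rewrite an existing 127.0.1.1 entry or append a new one. After it runs, /etc/hosts should carry exactly one such line, which can be checked with:
	  grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 newest-cni-290425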
	I1027 22:40:28.533712  756848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:40:28.533740  756848 ubuntu.go:190] setting up certificates
	I1027 22:40:28.533756  756848 provision.go:84] configureAuth start
	I1027 22:40:28.533832  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:28.551092  756848 provision.go:143] copyHostCerts
	I1027 22:40:28.551157  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:40:28.551183  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:40:28.551262  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:40:28.551424  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:40:28.551439  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:40:28.551489  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:40:28.551578  756848 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:40:28.551589  756848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:40:28.551627  756848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:40:28.551720  756848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.newest-cni-290425 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-290425]
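	minikube generates this server certificate in its own Go code; as a hedged openssl equivalent (illustrative file names, not the actual implementation), the log line amounts to issuing a cert signed by the machine CA whose subjectAltName covers every entry in the san=[...] list above:
	  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.newest-cni-290425"
	  openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
	    -CAcreateserial -days 365 -out server.pem \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-290425')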
	I1027 22:40:28.786512  756848 provision.go:177] copyRemoteCerts
	I1027 22:40:28.786589  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:40:28.786645  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.804351  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:28.905399  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:40:28.923296  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:40:28.940336  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:40:28.958758  756848 provision.go:87] duration metric: took 424.98667ms to configureAuth
	I1027 22:40:28.958786  756848 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:40:28.959034  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:28.959153  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:28.977021  756848 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:28.977337  756848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1027 22:40:28.977362  756848 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:40:29.254601  756848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:40:29.254630  756848 machine.go:97] duration metric: took 4.215620835s to provisionDockerMachine
	I1027 22:40:29.254645  756848 start.go:293] postStartSetup for "newest-cni-290425" (driver="docker")
	I1027 22:40:29.254658  756848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:40:29.254744  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:40:29.254799  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.272656  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.373656  756848 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:40:29.377346  756848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:40:29.377381  756848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:40:29.377394  756848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:40:29.377439  756848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:40:29.377507  756848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:40:29.377598  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:40:29.385749  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:29.403339  756848 start.go:296] duration metric: took 148.678819ms for postStartSetup
	I1027 22:40:29.403416  756848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:40:29.403473  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.421865  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.520183  756848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:40:29.524936  756848 fix.go:57] duration metric: took 4.865280599s for fixHost
	I1027 22:40:29.524989  756848 start.go:83] releasing machines lock for "newest-cni-290425", held for 4.865355811s
	I1027 22:40:29.525055  756848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-290425
	I1027 22:40:29.542221  756848 ssh_runner.go:195] Run: cat /version.json
	I1027 22:40:29.542269  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.542325  756848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:40:29.542380  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:29.560078  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.560376  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:29.658503  756848 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:29.714758  756848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:40:29.751819  756848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:40:29.757527  756848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:40:29.757592  756848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:40:29.766082  756848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
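	The find above (logged with its shell quoting stripped) renames any bridge or podman CNI configs out of the way so that only kindnet's config stays active; a copy-pasteable form of the same command:
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;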
	I1027 22:40:29.766107  756848 start.go:496] detecting cgroup driver to use...
	I1027 22:40:29.766144  756848 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:40:29.766201  756848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:40:29.782220  756848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:40:29.795704  756848 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:40:29.795756  756848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:40:29.811814  756848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:40:29.824770  756848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:40:29.911398  756848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:40:30.002621  756848 docker.go:234] disabling docker service ...
	I1027 22:40:30.002705  756848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:40:30.018425  756848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:40:30.032066  756848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:40:30.126259  756848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:40:30.224136  756848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:40:30.240695  756848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:40:30.262231  756848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:40:30.262309  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.272017  756848 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:40:30.272077  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.281097  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.290459  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.299765  756848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:40:30.308783  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.318037  756848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.326660  756848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:30.335545  756848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:40:30.343816  756848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:40:30.351923  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:30.438807  756848 ssh_runner.go:195] Run: sudo systemctl restart crio
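	Taken together, the sed edits above set the pause image, the cgroup manager, the conmon cgroup, and an unprivileged-port sysctl in the CRI-O drop-in, which the daemon-reload and crio restart then pick up. A quick verification, with expected values inferred from this log rather than dumped from the file:
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  # "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])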
	I1027 22:40:30.541588  756848 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:40:30.541647  756848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:40:30.545709  756848 start.go:564] Will wait 60s for crictl version
	I1027 22:40:30.545763  756848 ssh_runner.go:195] Run: which crictl
	I1027 22:40:30.549390  756848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:40:30.574840  756848 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:40:30.574912  756848 ssh_runner.go:195] Run: crio --version
	I1027 22:40:30.603907  756848 ssh_runner.go:195] Run: crio --version
	I1027 22:40:30.635251  756848 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:40:30.636309  756848 cli_runner.go:164] Run: docker network inspect newest-cni-290425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:30.652517  756848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:40:30.656856  756848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
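	The one-liner above is the idempotent hosts-entry pattern used throughout this log: strip any existing line for the name, append the current mapping, and cp the temp file over /etc/hosts (cp rather than mv, since a container's /etc/hosts is bind-mounted and must be rewritten in place). Generalized as a hedged helper, where update_hosts is an illustrative name:
	  update_hosts() {  # usage: update_hosts 192.168.76.1 host.minikube.internal
	    { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
	  }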
	I1027 22:40:30.668683  756848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1027 22:40:30.669554  756848 kubeadm.go:884] updating cluster {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:40:30.669731  756848 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:30.669822  756848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:30.704544  756848 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:30.704566  756848 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:40:30.704611  756848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:30.734075  756848 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:30.734098  756848 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:40:30.734106  756848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 22:40:30.734202  756848 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-290425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
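	The unit text above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the empty ExecStart= line is standard systemd practice to clear the packaged command before the override takes effect. The usual steps then apply:
	  sudo systemctl daemon-reload
	  systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf override
	  sudo systemctl restart kubelet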
	I1027 22:40:30.734273  756848 ssh_runner.go:195] Run: crio config
	I1027 22:40:30.780046  756848 cni.go:84] Creating CNI manager for ""
	I1027 22:40:30.780067  756848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:40:30.780090  756848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 22:40:30.780113  756848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-290425 NodeName:newest-cni-290425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:40:30.780240  756848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-290425"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:40:30.780304  756848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:40:30.788709  756848 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:40:30.788776  756848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:40:30.796691  756848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:40:30.809324  756848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:40:30.821977  756848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
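	The rendered kubeadm config above is shipped as kubeadm.yaml.new rather than overwriting the live copy; on a restart like this one, minikube diffs the two (the "sudo diff -u" step further down) and only reconfigures the control plane when they differ. A sketch of that check:
	  if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	    echo "kubeadm config unchanged; no control-plane reconfiguration needed"
	  fi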
	I1027 22:40:30.834850  756848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:40:30.838629  756848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:30.848598  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:30.930756  756848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:30.960505  756848 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425 for IP: 192.168.76.2
	I1027 22:40:30.960526  756848 certs.go:195] generating shared ca certs ...
	I1027 22:40:30.960549  756848 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:30.960716  756848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:40:30.960760  756848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:40:30.960770  756848 certs.go:257] generating profile certs ...
	I1027 22:40:30.960854  756848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/client.key
	I1027 22:40:30.960928  756848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key.46af5a67
	I1027 22:40:30.961028  756848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key
	I1027 22:40:30.961171  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:40:30.961204  756848 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:40:30.961217  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:40:30.961254  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:40:30.961289  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:40:30.961318  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:40:30.961382  756848 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:30.962311  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:40:30.982191  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:40:31.003485  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:40:31.024750  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:40:31.051339  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:40:31.070810  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:40:31.089035  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:40:31.107252  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/newest-cni-290425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:40:31.124793  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:40:31.142653  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:40:31.162599  756848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:40:31.180139  756848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:40:31.194578  756848 ssh_runner.go:195] Run: openssl version
	I1027 22:40:31.200775  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:40:31.210145  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.214047  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.214105  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:31.252428  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:40:31.261073  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:40:31.270127  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.274120  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.274183  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:40:31.309111  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:40:31.317698  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:40:31.326420  756848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.330243  756848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.330307  756848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:40:31.365724  756848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
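	Each hash/symlink pair above implements OpenSSL's subject-hash lookup convention: TLS clients resolve CAs in /etc/ssl/certs by <subject-hash>.0 filenames, so every trusted PEM gets a link named after its hash (the same thing "openssl rehash" does for a whole directory). For the minikube CA, for example:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 per this log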
	I1027 22:40:31.374331  756848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:40:31.378340  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:40:31.413065  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:40:31.448812  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:40:31.492414  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:40:31.536913  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:40:31.581567  756848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
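	The -checkend 86400 probes above exit non-zero when a certificate expires within 86400 seconds (24 hours); here they decide whether the existing control-plane certs can be reused on restart. For instance:
	  if ! sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	    echo "apiserver cert expires within 24h"   # minikube would regenerate it
	  fi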
	I1027 22:40:31.637412  756848 kubeadm.go:401] StartCluster: {Name:newest-cni-290425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-290425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:31.637550  756848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:40:31.637610  756848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:40:31.673955  756848 cri.go:89] found id: "5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727"
	I1027 22:40:31.673983  756848 cri.go:89] found id: "e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295"
	I1027 22:40:31.673988  756848 cri.go:89] found id: "bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f"
	I1027 22:40:31.673993  756848 cri.go:89] found id: "54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d"
	I1027 22:40:31.673996  756848 cri.go:89] found id: ""
	I1027 22:40:31.674047  756848 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:40:31.687812  756848 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:40:31Z" level=error msg="open /run/runc: no such file or directory"
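	The failure is tolerated: a missing /run/runc state directory just means runc has not tracked any container in this freshly restarted node, so there is nothing paused to resume and minikube proceeds. A hedged equivalent of that tolerant check:
	  sudo runc list -f json 2>/dev/null || echo "no runc state; nothing to unpause"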
	I1027 22:40:31.687887  756848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:40:31.697214  756848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:40:31.697231  756848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:40:31.697274  756848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:40:31.705188  756848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:40:31.706218  756848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-290425" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:31.706815  756848 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-482142/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-290425" cluster setting kubeconfig missing "newest-cni-290425" context setting]
	I1027 22:40:31.708077  756848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.710194  756848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:40:31.719725  756848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 22:40:31.719756  756848 kubeadm.go:602] duration metric: took 22.519377ms to restartPrimaryControlPlane
	I1027 22:40:31.719767  756848 kubeadm.go:403] duration metric: took 82.367104ms to StartCluster
	I1027 22:40:31.719783  756848 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.719848  756848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:31.722417  756848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:31.722691  756848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:40:31.722773  756848 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:40:31.722874  756848 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-290425"
	I1027 22:40:31.722893  756848 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-290425"
	W1027 22:40:31.722902  756848 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:40:31.722931  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.722937  756848 addons.go:69] Setting dashboard=true in profile "newest-cni-290425"
	I1027 22:40:31.722973  756848 addons.go:238] Setting addon dashboard=true in "newest-cni-290425"
	W1027 22:40:31.722982  756848 addons.go:247] addon dashboard should already be in state true
	I1027 22:40:31.722987  756848 config.go:182] Loaded profile config "newest-cni-290425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:31.723014  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.723047  756848 addons.go:69] Setting default-storageclass=true in profile "newest-cni-290425"
	I1027 22:40:31.723064  756848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-290425"
	I1027 22:40:31.723353  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.723550  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.723800  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.725730  756848 out.go:179] * Verifying Kubernetes components...
	I1027 22:40:31.726934  756848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:31.749813  756848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:40:31.749929  756848 addons.go:238] Setting addon default-storageclass=true in "newest-cni-290425"
	W1027 22:40:31.749966  756848 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:40:31.750012  756848 host.go:66] Checking if "newest-cni-290425" exists ...
	I1027 22:40:31.750560  756848 cli_runner.go:164] Run: docker container inspect newest-cni-290425 --format={{.State.Status}}
	I1027 22:40:31.750761  756848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 22:40:31.750784  756848 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:31.750805  756848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:40:31.750863  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.756414  756848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1027 22:40:29.188109  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:31.188378  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:31.757286  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 22:40:31.757307  756848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 22:40:31.757368  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.788482  756848 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:31.788523  756848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:40:31.788585  756848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-290425
	I1027 22:40:31.788473  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.791269  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.812300  756848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/newest-cni-290425/id_rsa Username:docker}
	I1027 22:40:31.876427  756848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:31.890087  756848 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:40:31.890171  756848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:40:31.904610  756848 api_server.go:72] duration metric: took 181.883596ms to wait for apiserver process to appear ...
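Note: the "apiserver process" wait above is just a pgrep over the full command line (-f) for an exact match (-x), taking the newest PID (-n). A sketch using the exact pattern from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same invocation as the ssh_runner line above; pgrep exits non-zero
        // until a kube-apiserver whose cmdline mentions "minikube" is running.
        out, err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found yet:", err)
            return
        }
        fmt.Println("apiserver PID:", strings.TrimSpace(string(out)))
    }
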
	I1027 22:40:31.904641  756848 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:40:31.904675  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:31.911250  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:40:31.913745  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 22:40:31.913771  756848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 22:40:31.928922  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 22:40:31.928985  756848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 22:40:31.937602  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:40:31.944700  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 22:40:31.944729  756848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 22:40:31.965934  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 22:40:31.965991  756848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 22:40:31.983504  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 22:40:31.983534  756848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 22:40:32.000875  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 22:40:32.000897  756848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 22:40:32.015058  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 22:40:32.015175  756848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 22:40:32.028828  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 22:40:32.028864  756848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 22:40:32.042288  756848 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 22:40:32.042313  756848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 22:40:32.055615  756848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
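Note: each dashboard manifest is first staged under /etc/kubernetes/addons (the scp lines above), then everything is applied in one kubectl invocation rather than ten. A sketch of composing that command from the staged file list (paths as in the log):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        manifests := []string{
            "dashboard-ns", "dashboard-clusterrole", "dashboard-clusterrolebinding",
            "dashboard-configmap", "dashboard-dp", "dashboard-role",
            "dashboard-rolebinding", "dashboard-sa", "dashboard-secret", "dashboard-svc",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", "/etc/kubernetes/addons/"+m+".yaml")
        }
        // A single apply keeps the addon enable close to atomic and saves
        // nine extra kubectl startups over SSH.
        fmt.Println("kubectl " + strings.Join(args, " "))
    }
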
	I1027 22:40:33.250603  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:40:33.250644  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:40:33.250661  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.259803  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:40:33.259841  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:40:33.405243  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.410997  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:33.411027  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:33.838488  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.927200802s)
	I1027 22:40:33.838547  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.900910209s)
	I1027 22:40:33.838682  756848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.783022367s)
	I1027 22:40:33.840157  756848 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-290425 addons enable metrics-server
	
	I1027 22:40:33.849803  756848 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 22:40:33.851036  756848 addons.go:514] duration metric: took 2.128273879s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 22:40:33.905759  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:33.909856  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:33.909880  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:34.405178  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:34.409922  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:40:34.409969  756848 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:40:34.905382  756848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 22:40:34.910198  756848 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 22:40:34.911208  756848 api_server.go:141] control plane version: v1.34.1
	I1027 22:40:34.911251  756848 api_server.go:131] duration metric: took 3.006601962s to wait for apiserver health ...
	I1027 22:40:34.911260  756848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:40:34.915094  756848 system_pods.go:59] 8 kube-system pods found
	I1027 22:40:34.915146  756848 system_pods.go:61] "coredns-66bc5c9577-hmtz5" [d0253fb1-e66b-448e-8b6d-e9882120ffd2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:34.915160  756848 system_pods.go:61] "etcd-newest-cni-290425" [fa08a886-4040-46e0-9e58-975345432c48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:40:34.915178  756848 system_pods.go:61] "kindnet-pk58m" [12e1d8a7-de11-4047-85f7-4832c3a7e80c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 22:40:34.915190  756848 system_pods.go:61] "kube-apiserver-newest-cni-290425" [36218ab8-7cc4-4487-9dcd-5186adc9d4c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:40:34.915203  756848 system_pods.go:61] "kube-controller-manager-newest-cni-290425" [494bc2f7-8ec5-40bb-bd19-0c4a96b93532] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:40:34.915217  756848 system_pods.go:61] "kube-proxy-d866g" [ba6a46e3-367b-40d2-a919-35b062379af3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 22:40:34.915235  756848 system_pods.go:61] "kube-scheduler-newest-cni-290425" [69cd3450-9c48-455d-9bc0-b8f45eeb37c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:40:34.915246  756848 system_pods.go:61] "storage-provisioner" [d8b271bc-46b6-4d99-a6a2-27907f5afc55] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 22:40:34.915256  756848 system_pods.go:74] duration metric: took 3.987353ms to wait for pod list to return data ...
	I1027 22:40:34.915270  756848 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:40:34.917715  756848 default_sa.go:45] found service account: "default"
	I1027 22:40:34.917735  756848 default_sa.go:55] duration metric: took 2.459034ms for default service account to be created ...
	I1027 22:40:34.917746  756848 kubeadm.go:587] duration metric: took 3.195028043s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 22:40:34.917762  756848 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:40:34.920111  756848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 22:40:34.920150  756848 node_conditions.go:123] node cpu capacity is 8
	I1027 22:40:34.920168  756848 node_conditions.go:105] duration metric: took 2.398457ms to run NodePressure ...
	I1027 22:40:34.920187  756848 start.go:242] waiting for startup goroutines ...
	I1027 22:40:34.920198  756848 start.go:247] waiting for cluster config update ...
	I1027 22:40:34.920210  756848 start.go:256] writing updated cluster config ...
	I1027 22:40:34.920542  756848 ssh_runner.go:195] Run: rm -f paused
	I1027 22:40:34.975966  756848 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:40:34.978357  756848 out.go:179] * Done! kubectl is now configured to use "newest-cni-290425" cluster and "default" namespace by default
	W1027 22:40:33.190344  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:35.689191  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
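Note: the interleaved W... 753728 lines come from a different test binary sharing this log file, polling a coredns pod for the Ready condition. A client-go sketch of that readiness check (the kubeconfig path below is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "coredns-66bc5c9577-bvr8f", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", podReady(pod))
    }
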
	
	
	==> CRI-O <==
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.339027833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.342465456Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0045808a-209d-48de-9d02-c79cadfd47ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.343323802Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6ae77e2d-e256-4547-baae-08e19974503a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.344234864Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.344911841Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.345165423Z" level=info msg="Ran pod sandbox c2c10d450df76393c0730687102cb037fc0e2d2bd9d5532a2b1c287608ed32e8 with infra container: kube-system/kindnet-pk58m/POD" id=0045808a-209d-48de-9d02-c79cadfd47ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.345770034Z" level=info msg="Ran pod sandbox 12bc48478c348d3a77b92b0aef3fd96ec09b6e164d55df6ccf8572b37f7471cd with infra container: kube-system/kube-proxy-d866g/POD" id=6ae77e2d-e256-4547-baae-08e19974503a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.34653089Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=80e11120-8804-42c7-9f9d-b2b8515bfd11 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.347185404Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e2e34137-c394-4d5b-8bd0-2e3a3d3b3e7b name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.347498758Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=830ce0c1-26cc-4131-8732-c4d0361bfce9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.348223106Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8d445468-3c45-4d3e-b49e-25b5d2d77e19 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.34864001Z" level=info msg="Creating container: kube-system/kindnet-pk58m/kindnet-cni" id=99f97772-a879-4496-95d5-cb4c65041246 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.34873663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.349167313Z" level=info msg="Creating container: kube-system/kube-proxy-d866g/kube-proxy" id=131f57b6-e665-4dae-a281-e59b6c39840a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.349293766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.353623342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.35427441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.356397288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.357023895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.384966837Z" level=info msg="Created container 8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a: kube-system/kindnet-pk58m/kindnet-cni" id=99f97772-a879-4496-95d5-cb4c65041246 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.385651103Z" level=info msg="Starting container: 8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a" id=78d2ed1c-dfcb-43a8-b10e-abecac821178 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.387724216Z" level=info msg="Started container" PID=1042 containerID=8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a description=kube-system/kindnet-pk58m/kindnet-cni id=78d2ed1c-dfcb-43a8-b10e-abecac821178 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2c10d450df76393c0730687102cb037fc0e2d2bd9d5532a2b1c287608ed32e8
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.388843122Z" level=info msg="Created container 3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465: kube-system/kube-proxy-d866g/kube-proxy" id=131f57b6-e665-4dae-a281-e59b6c39840a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.389399869Z" level=info msg="Starting container: 3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465" id=f2a82451-eac2-4896-b43b-9c2818ce21aa name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:34 newest-cni-290425 crio[517]: time="2025-10-27T22:40:34.392577894Z" level=info msg="Started container" PID=1043 containerID=3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465 description=kube-system/kube-proxy-d866g/kube-proxy id=f2a82451-eac2-4896-b43b-9c2818ce21aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=12bc48478c348d3a77b92b0aef3fd96ec09b6e164d55df6ccf8572b37f7471cd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3b441743d670c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   12bc48478c348       kube-proxy-d866g                            kube-system
	8e02b64166215       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   c2c10d450df76       kindnet-pk58m                               kube-system
	5c6f16a2765ac       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   17be29980ea17       kube-apiserver-newest-cni-290425            kube-system
	e2e676795ba20       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   6d1da6c6c5020       etcd-newest-cni-290425                      kube-system
	bcada78a58b8a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   1f4026dd5b015       kube-controller-manager-newest-cni-290425   kube-system
	54cf126c5f012       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   06a38e8593c07       kube-scheduler-newest-cni-290425            kube-system
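Note: the table above is the CRI's view of the node, independent of kubectl; with cri-o as the runtime it can be reproduced on the node via crictl. A sketch (assumes crictl is on PATH and the crio socket is at its default location):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `crictl ps -a` lists containers over CRI, matching the columns
        // above (container ID, image, state, name, attempt, pod).
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(string(out))
    }
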
	
	
	==> describe nodes <==
	Name:               newest-cni-290425
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-290425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=newest-cni-290425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_39_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:39:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-290425
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:40:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:40:33 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:40:33 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:40:33 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 22:40:33 +0000   Mon, 27 Oct 2025 22:39:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-290425
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e391c1d9-7d95-420d-8069-436e90adb7af
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-290425                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         45s
	  kube-system                 kindnet-pk58m                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-newest-cni-290425             250m (3%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-newest-cni-290425    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-d866g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-newest-cni-290425             100m (1%)     0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s                kubelet          Node newest-cni-290425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s                kubelet          Node newest-cni-290425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s                kubelet          Node newest-cni-290425 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node newest-cni-290425 event: Registered Node newest-cni-290425 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node newest-cni-290425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node newest-cni-290425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x8 over 10s)  kubelet          Node newest-cni-290425 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-290425 event: Registered Node newest-cni-290425 in Controller
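Note: the describe output ties the run together: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint because the kubelet reports NetworkReady=false until kindnet writes a CNI config under /etc/cni/net.d/, which is exactly why coredns and storage-provisioner showed Pending: Unschedulable earlier in the log. A client-go sketch that checks both signals (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "newest-cni-290425", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Scheduling-blocking taints, e.g. node.kubernetes.io/not-ready:NoSchedule.
        for _, t := range node.Spec.Taints {
            fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
        }
        // The Ready condition the kubelet reports (False while CNI is missing).
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
            }
        }
    }
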
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [e2e676795ba20aae505a22108af3c33b27b2039e426adf854bbcfe4ed785f295] <==
	{"level":"warn","ts":"2025-10-27T22:40:32.636442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.642143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.648125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.662247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.668119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.674534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.681035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.688392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.694402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.701215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.709688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.716269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.721991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.728346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.733923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.739614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.745733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.751440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.756926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.769564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.775276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.793874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.799813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.805979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:32.851128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56876","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:40:41 up  2:22,  0 user,  load average: 4.65, 3.30, 2.95
	Linux newest-cni-290425 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8e02b64166215f41823da5fbe4f6afa969cc83107828186641bc4fa1415a141a] <==
	I1027 22:40:34.522981       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:40:34.523209       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 22:40:34.523320       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:40:34.523337       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:40:34.523360       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:40:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:40:34.725793       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:40:34.725858       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:40:34.725876       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:40:34.726352       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 22:40:34.727064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 22:40:34.726938       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 22:40:34.819811       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 22:40:34.820444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 22:40:36.327307       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:40:36.327333       1 metrics.go:72] Registering metrics
	I1027 22:40:36.327385       1 controller.go:711] "Syncing nftables rules"
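Note: kindnet's startup shows the standard client-go informer pattern: the first List calls fail with "connection refused" while the apiserver is still coming up, the reflector retries with backoff, and about two seconds later "Caches are synced" fires. A minimal sketch of that wait (kubeconfig path assumed):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        pods := factory.Core().V1().Pods().Informer()
        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        // Blocks until the initial List+Watch succeeds; transient "connection
        // refused" errors are retried internally, as in the kindnet log above.
        if !cache.WaitForCacheSync(stop, pods.HasSynced) {
            panic("caches never synced")
        }
        fmt.Println("caches are synced")
    }
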
	
	
	==> kube-apiserver [5c6f16a2765ac4bdb8db042d29939ff67bdd1db836137d98bd170e7d1e41a727] <==
	I1027 22:40:33.334324       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:40:33.334391       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 22:40:33.335375       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:40:33.336124       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 22:40:33.336273       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 22:40:33.336294       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 22:40:33.337114       1 aggregator.go:171] initial CRD sync complete...
	I1027 22:40:33.337131       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:40:33.337139       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:40:33.337146       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:40:33.344229       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:40:33.359914       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:40:33.360803       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:40:33.641182       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:40:33.665718       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:40:33.682335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:40:33.689023       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:40:33.695094       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:40:33.724875       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.124.233"}
	I1027 22:40:33.733539       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.7.43"}
	I1027 22:40:34.238477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:40:36.946927       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:40:37.097337       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:40:37.247791       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bcada78a58b8a8ca59f0601dbbe5b52ebef3f5b2e055602ea90f951529aca61f] <==
	I1027 22:40:36.693517       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:40:36.693535       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:40:36.693558       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:40:36.694729       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 22:40:36.695878       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 22:40:36.697571       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 22:40:36.703573       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:40:36.703585       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:40:36.703720       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:40:36.703586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:40:36.703816       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:40:36.703931       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:40:36.705062       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:40:36.703587       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:40:36.703576       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:40:36.704719       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:40:36.703602       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:40:36.706504       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:40:36.708979       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:40:36.712360       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:40:36.716733       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:40:36.725070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:40:36.730999       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:40:36.731038       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:40:36.731060       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3b441743d670cb58825f2f393d4fa275e91d8b9a56aa0c4a8132843e0ae93465] <==
	I1027 22:40:34.432315       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:40:34.502773       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:40:34.603602       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:40:34.603639       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 22:40:34.603760       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:40:34.623223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:40:34.623273       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:40:34.628611       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:40:34.629036       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:40:34.629060       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:34.630322       1 config.go:200] "Starting service config controller"
	I1027 22:40:34.630351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:40:34.630451       1 config.go:309] "Starting node config controller"
	I1027 22:40:34.630465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:40:34.630445       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:40:34.630479       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:40:34.630491       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:40:34.630544       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:40:34.630558       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:40:34.730556       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:40:34.730719       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:40:34.730755       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [54cf126c5f01241f27207f3fdf1efb544769da6c6c0566f35c8387449126358d] <==
	I1027 22:40:32.216393       1 serving.go:386] Generated self-signed cert in-memory
	I1027 22:40:33.303792       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:40:33.303902       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:33.310430       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 22:40:33.310447       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:40:33.310465       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 22:40:33.310477       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:40:33.310526       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:40:33.310556       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:40:33.310835       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:40:33.310920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:40:33.411083       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:40:33.411121       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 22:40:33.411110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.078780     665 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-290425\" not found" node="newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.335528     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.345915     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-290425\" already exists" pod="kube-system/etcd-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.345990     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.353913     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-290425\" already exists" pod="kube-system/kube-apiserver-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.353965     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.356437     665 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.356541     665 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.356580     665 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.357934     665 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.359923     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-290425\" already exists" pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.360006     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.365399     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-290425\" already exists" pod="kube-system/kube-scheduler-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: I1027 22:40:33.621427     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:40:33 newest-cni-290425 kubelet[665]: E1027 22:40:33.635353     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-290425\" already exists" pod="kube-system/kube-controller-manager-newest-cni-290425"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.031154     665 apiserver.go:52] "Watching apiserver"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.134620     665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161675     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba6a46e3-367b-40d2-a919-35b062379af3-xtables-lock\") pod \"kube-proxy-d866g\" (UID: \"ba6a46e3-367b-40d2-a919-35b062379af3\") " pod="kube-system/kube-proxy-d866g"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161726     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-xtables-lock\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161750     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-lib-modules\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161768     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba6a46e3-367b-40d2-a919-35b062379af3-lib-modules\") pod \"kube-proxy-d866g\" (UID: \"ba6a46e3-367b-40d2-a919-35b062379af3\") " pod="kube-system/kube-proxy-d866g"
	Oct 27 22:40:34 newest-cni-290425 kubelet[665]: I1027 22:40:34.161780     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/12e1d8a7-de11-4047-85f7-4832c3a7e80c-cni-cfg\") pod \"kindnet-pk58m\" (UID: \"12e1d8a7-de11-4047-85f7-4832c3a7e80c\") " pod="kube-system/kindnet-pk58m"
	Oct 27 22:40:36 newest-cni-290425 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:40:36 newest-cni-290425 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:40:36 newest-cni-290425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
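The final kubelet entries above show systemd stopping the unit at 22:40:36, which lines up with the pause flow running `sudo systemctl disable --now kubelet` before listing containers (visible in the default-k8s-diff-port trace below). A quick check, assuming the newest-cni-290425 node were still running (the Audit log below shows it was later deleted):

	# Confirm the failed pause attempt left kubelet stopped and disabled
	minikube ssh -p newest-cni-290425 -- sudo systemctl is-active kubelet
	minikube ssh -p newest-cni-290425 -- sudo systemctl is-enabled kubelet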
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-290425 -n newest-cni-290425
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-290425 -n newest-cni-290425: exit status 2 (395.006955ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-290425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-hmtz5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5: exit status 1 (67.664432ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-hmtz5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-m6nlx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c62x5" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-290425 describe pod coredns-66bc5c9577-hmtz5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m6nlx kubernetes-dashboard-855c9754f9-c62x5: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.46s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (7.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-927034 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-927034 --alsologtostderr -v=1: exit status 80 (2.222502927s)

-- stdout --
	* Pausing node default-k8s-diff-port-927034 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 22:41:09.915914  773966 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:41:09.916257  773966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:41:09.916265  773966 out.go:374] Setting ErrFile to fd 2...
	I1027 22:41:09.916271  773966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:41:09.916604  773966 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:41:09.916965  773966 out.go:368] Setting JSON to false
	I1027 22:41:09.917025  773966 mustload.go:66] Loading cluster: default-k8s-diff-port-927034
	I1027 22:41:09.917383  773966 config.go:182] Loaded profile config "default-k8s-diff-port-927034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:41:09.917991  773966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927034 --format={{.State.Status}}
	I1027 22:41:09.944232  773966 host.go:66] Checking if "default-k8s-diff-port-927034" exists ...
	I1027 22:41:09.944592  773966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:41:10.003172  773966 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:83 OomKillDisable:false NGoroutines:89 SystemTime:2025-10-27 22:41:09.992252191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:41:10.003807  773966 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-927034 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 22:41:10.006452  773966 out.go:179] * Pausing node default-k8s-diff-port-927034 ... 
	I1027 22:41:10.007470  773966 host.go:66] Checking if "default-k8s-diff-port-927034" exists ...
	I1027 22:41:10.007729  773966 ssh_runner.go:195] Run: systemctl --version
	I1027 22:41:10.007768  773966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927034
	I1027 22:41:10.024543  773966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/default-k8s-diff-port-927034/id_rsa Username:docker}
	I1027 22:41:10.129669  773966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:41:10.159073  773966 pause.go:52] kubelet running: true
	I1027 22:41:10.159150  773966 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:41:10.345844  773966 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:41:10.345937  773966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:41:10.420419  773966 cri.go:89] found id: "827e84f1fab22b15e97cd49ea5930dc974a7849de6da28521576edd02930da17"
	I1027 22:41:10.420441  773966 cri.go:89] found id: "dd925db2f94fb591e9c7cb190ecb837b75758b86b30152040595a82ecd10fac3"
	I1027 22:41:10.420446  773966 cri.go:89] found id: "941141ecdf5542a303eff7ec706390c2f855de75447f8261b3667f38a2495d01"
	I1027 22:41:10.420449  773966 cri.go:89] found id: "dddf4daea9020cf289743053ebca403400a4f7513ff226a3edfb5fc2caf01a72"
	I1027 22:41:10.420457  773966 cri.go:89] found id: "ababe86c36b425bd0273434f7b483138971716fbdf50f44c100e55918006dcfb"
	I1027 22:41:10.420461  773966 cri.go:89] found id: "9cda36d13a02141502e61a8f0bd69b14fb79ac20826af4e9365b17402d4e4467"
	I1027 22:41:10.420463  773966 cri.go:89] found id: "a73ac42016306256e53333754b058b687911ab56a58a53efba33e2650ed7f3c4"
	I1027 22:41:10.420466  773966 cri.go:89] found id: "341e84318f679f97a704241f45d9cfde3d9e2e8695ec44c4ff77dcb1b0fb2385"
	I1027 22:41:10.420468  773966 cri.go:89] found id: "844da32e0557faa56becf52073bd2e1d4107c6dcd6a6994bf7b807ec687a20df"
	I1027 22:41:10.420475  773966 cri.go:89] found id: "78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	I1027 22:41:10.420477  773966 cri.go:89] found id: "943e0d285e380306579142f00ea866adbc1a6d3e36fe8de0c8f3a0cfa6d58fda"
	I1027 22:41:10.420480  773966 cri.go:89] found id: ""
	I1027 22:41:10.420517  773966 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:41:10.433106  773966 retry.go:31] will retry after 371.953009ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:41:10Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:41:10.805823  773966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:41:10.826632  773966 pause.go:52] kubelet running: false
	I1027 22:41:10.826709  773966 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:41:11.070461  773966 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:41:11.070627  773966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:41:11.195115  773966 cri.go:89] found id: "827e84f1fab22b15e97cd49ea5930dc974a7849de6da28521576edd02930da17"
	I1027 22:41:11.195178  773966 cri.go:89] found id: "dd925db2f94fb591e9c7cb190ecb837b75758b86b30152040595a82ecd10fac3"
	I1027 22:41:11.195185  773966 cri.go:89] found id: "941141ecdf5542a303eff7ec706390c2f855de75447f8261b3667f38a2495d01"
	I1027 22:41:11.195191  773966 cri.go:89] found id: "dddf4daea9020cf289743053ebca403400a4f7513ff226a3edfb5fc2caf01a72"
	I1027 22:41:11.195195  773966 cri.go:89] found id: "ababe86c36b425bd0273434f7b483138971716fbdf50f44c100e55918006dcfb"
	I1027 22:41:11.195224  773966 cri.go:89] found id: "9cda36d13a02141502e61a8f0bd69b14fb79ac20826af4e9365b17402d4e4467"
	I1027 22:41:11.195235  773966 cri.go:89] found id: "a73ac42016306256e53333754b058b687911ab56a58a53efba33e2650ed7f3c4"
	I1027 22:41:11.195239  773966 cri.go:89] found id: "341e84318f679f97a704241f45d9cfde3d9e2e8695ec44c4ff77dcb1b0fb2385"
	I1027 22:41:11.195244  773966 cri.go:89] found id: "844da32e0557faa56becf52073bd2e1d4107c6dcd6a6994bf7b807ec687a20df"
	I1027 22:41:11.195267  773966 cri.go:89] found id: "78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	I1027 22:41:11.195272  773966 cri.go:89] found id: "943e0d285e380306579142f00ea866adbc1a6d3e36fe8de0c8f3a0cfa6d58fda"
	I1027 22:41:11.195276  773966 cri.go:89] found id: ""
	I1027 22:41:11.195394  773966 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:41:11.214451  773966 retry.go:31] will retry after 479.994009ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:41:11Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:41:11.695182  773966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:41:11.722692  773966 pause.go:52] kubelet running: false
	I1027 22:41:11.722774  773966 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 22:41:11.935699  773966 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 22:41:11.935797  773966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 22:41:12.034604  773966 cri.go:89] found id: "827e84f1fab22b15e97cd49ea5930dc974a7849de6da28521576edd02930da17"
	I1027 22:41:12.034635  773966 cri.go:89] found id: "dd925db2f94fb591e9c7cb190ecb837b75758b86b30152040595a82ecd10fac3"
	I1027 22:41:12.034641  773966 cri.go:89] found id: "941141ecdf5542a303eff7ec706390c2f855de75447f8261b3667f38a2495d01"
	I1027 22:41:12.034646  773966 cri.go:89] found id: "dddf4daea9020cf289743053ebca403400a4f7513ff226a3edfb5fc2caf01a72"
	I1027 22:41:12.034650  773966 cri.go:89] found id: "ababe86c36b425bd0273434f7b483138971716fbdf50f44c100e55918006dcfb"
	I1027 22:41:12.034661  773966 cri.go:89] found id: "9cda36d13a02141502e61a8f0bd69b14fb79ac20826af4e9365b17402d4e4467"
	I1027 22:41:12.034666  773966 cri.go:89] found id: "a73ac42016306256e53333754b058b687911ab56a58a53efba33e2650ed7f3c4"
	I1027 22:41:12.034670  773966 cri.go:89] found id: "341e84318f679f97a704241f45d9cfde3d9e2e8695ec44c4ff77dcb1b0fb2385"
	I1027 22:41:12.034675  773966 cri.go:89] found id: "844da32e0557faa56becf52073bd2e1d4107c6dcd6a6994bf7b807ec687a20df"
	I1027 22:41:12.034695  773966 cri.go:89] found id: "78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	I1027 22:41:12.034702  773966 cri.go:89] found id: "943e0d285e380306579142f00ea866adbc1a6d3e36fe8de0c8f3a0cfa6d58fda"
	I1027 22:41:12.034705  773966 cri.go:89] found id: ""
	I1027 22:41:12.034759  773966 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:41:12.053257  773966 out.go:203] 
	W1027 22:41:12.054246  773966 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:41:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:41:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:41:12.054272  773966 out.go:285] * 
	* 
	W1027 22:41:12.060153  773966 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:41:12.061162  773966 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-927034 --alsologtostderr -v=1 failed: exit status 80
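The pause flow in the trace above disables kubelet, enumerates CRI containers with crictl, and then calls `sudo runc list -f json` to find what to freeze; all three attempts fail with `open /run/runc: no such file or directory`, which is the immediate cause of the GUEST_PAUSE exit. A minimal repro sketch, assuming the profile is still up; the `/run/crun` path is only an assumption about where an alternate OCI runtime would keep its state:

	# Re-run the exact call that pause makes inside the node
	minikube ssh -p default-k8s-diff-port-927034 -- sudo runc list -f json
	# See which runtime state directory actually exists on the node
	minikube ssh -p default-k8s-diff-port-927034 -- sudo ls -ld /run/runc /run/crun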
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-927034
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-927034:

-- stdout --
	[
	    {
	        "Id": "d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a",
	        "Created": "2025-10-27T22:39:00.365066876Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 753941,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:40:07.686275778Z",
	            "FinishedAt": "2025-10-27T22:40:06.677682314Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/hosts",
	        "LogPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a-json.log",
	        "Name": "/default-k8s-diff-port-927034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-927034:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-927034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a",
	                "LowerDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-927034",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-927034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-927034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-927034",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-927034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9f068dfb11d0f58e080b8853e862fb40d0205711c5deaa2d6ca1996c706d09d",
	            "SandboxKey": "/var/run/docker/netns/a9f068dfb11d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-927034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:3a:ec:7b:df:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "25e72b99ac2bb46615ab3180c2d17b65b027e144e1892b4833bd16fb1b4eb32a",
	                    "EndpointID": "d9d6b22e147667ac4a9b899d3f00c3babf3075afe9047b3ed59d797c37fced52",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-927034",
	                        "d0fdd499dd47"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
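The SSH port used earlier in the trace (127.0.0.1:33098) is resolved from the `NetworkSettings.Ports` block above with the same Go template minikube ran at the start of the pause attempt; a sketch of that lookup, assuming the container still exists:

	# Extract the host port mapped to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-927034
	# Given the Ports block above, this prints: 33098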
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034: exit status 2 (378.509701ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927034 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-927034 logs -n 25: (1.384362031s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-293335 sudo systemctl cat docker --no-pager                                                                                                                │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo docker system info                                                                                                                             │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ delete  │ -p embed-certs-829976                                                                                                                                              │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ start   │ -p kindnet-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                           │ kindnet-293335               │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cri-dockerd --version                                                                                                                          │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ delete  │ -p newest-cni-290425                                                                                                                                               │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p calico-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-293335                │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo containerd config dump                                                                                                                         │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo crio config                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p auto-293335                                                                                                                                                     │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p custom-flannel-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-293335        │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ image   │ default-k8s-diff-port-927034 image list --format=json                                                                                                              │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:41 UTC │ 27 Oct 25 22:41 UTC │
	│ pause   │ -p default-k8s-diff-port-927034 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:40:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:40:51.704540  769174 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:40:51.704910  769174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:51.704924  769174 out.go:374] Setting ErrFile to fd 2...
	I1027 22:40:51.704932  769174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:51.705278  769174 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:40:51.706302  769174 out.go:368] Setting JSON to false
	I1027 22:40:51.708157  769174 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8591,"bootTime":1761596261,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:40:51.708260  769174 start.go:143] virtualization: kvm guest
	I1027 22:40:51.710046  769174 out.go:179] * [custom-flannel-293335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:40:51.711512  769174 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:40:51.711556  769174 notify.go:221] Checking for updates...
	I1027 22:40:51.713429  769174 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:40:51.714559  769174 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:51.716536  769174 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:40:51.717688  769174 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:40:51.718762  769174 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:40:51.720331  769174 config.go:182] Loaded profile config "calico-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:51.720469  769174 config.go:182] Loaded profile config "default-k8s-diff-port-927034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:51.720610  769174 config.go:182] Loaded profile config "kindnet-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:51.720715  769174 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:40:51.748412  769174 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:40:51.748510  769174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:51.813919  769174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:80 SystemTime:2025-10-27 22:40:51.803177553 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:51.814113  769174 docker.go:318] overlay module found
	I1027 22:40:51.815601  769174 out.go:179] * Using the docker driver based on user configuration
	I1027 22:40:51.816553  769174 start.go:307] selected driver: docker
	I1027 22:40:51.816577  769174 start.go:928] validating driver "docker" against <nil>
	I1027 22:40:51.816599  769174 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:40:51.817288  769174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:51.894340  769174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-27 22:40:51.882710033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:51.894603  769174 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:40:51.894892  769174 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:40:51.898473  769174 out.go:179] * Using Docker driver with root privileges
	I1027 22:40:51.899513  769174 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 22:40:51.899555  769174 start_flags.go:335] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1027 22:40:51.899664  769174 start.go:351] cluster config:
	{Name:custom-flannel-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:51.900932  769174 out.go:179] * Starting "custom-flannel-293335" primary control-plane node in "custom-flannel-293335" cluster
	I1027 22:40:51.902454  769174 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:40:51.903618  769174 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:40:51.904878  769174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:51.904925  769174 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:40:51.904930  769174 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:40:51.904962  769174 cache.go:59] Caching tarball of preloaded images
	I1027 22:40:51.905086  769174 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:40:51.905105  769174 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:40:51.905238  769174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/config.json ...
	I1027 22:40:51.905265  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/config.json: {Name:mk3ce478049d79270c8b348738fd744d03d55050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:51.930034  769174 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:40:51.930062  769174 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:40:51.930084  769174 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:40:51.930116  769174 start.go:360] acquireMachinesLock for custom-flannel-293335: {Name:mk8bc4d416d94d524af58772a15b2831e6e4bb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:40:51.930224  769174 start.go:364] duration metric: took 85.39µs to acquireMachinesLock for "custom-flannel-293335"
	I1027 22:40:51.930257  769174 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:40:51.930356  769174 start.go:125] createHost starting for "" (driver="docker")
	W1027 22:40:49.688058  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:51.689800  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:48.240140  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Running}}
	I1027 22:40:48.267417  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:40:48.292741  764907 cli_runner.go:164] Run: docker exec kindnet-293335 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:40:48.343720  764907 oci.go:144] the created container "kindnet-293335" has a running status.
	I1027 22:40:48.343767  764907 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa...
	I1027 22:40:49.180234  764907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:40:49.290478  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:40:49.308149  764907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:40:49.308178  764907 kic_runner.go:114] Args: [docker exec --privileged kindnet-293335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:40:49.357719  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:40:49.376767  764907 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:49.376854  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:49.395082  764907 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:49.395364  764907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1027 22:40:49.395382  764907 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:49.538199  764907 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-293335
	
	I1027 22:40:49.538241  764907 ubuntu.go:182] provisioning hostname "kindnet-293335"
	I1027 22:40:49.538315  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:49.559538  764907 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:49.559759  764907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1027 22:40:49.559773  764907 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-293335 && echo "kindnet-293335" | sudo tee /etc/hostname
	I1027 22:40:49.777617  764907 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-293335
	
	I1027 22:40:49.777738  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:49.799981  764907 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:49.800245  764907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1027 22:40:49.800272  764907 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-293335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-293335/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-293335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:40:49.943822  764907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
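The hostname provisioning above follows an idempotent pattern: set the kernel hostname, persist it to /etc/hostname, then make /etc/hosts resolve it, rewriting any existing 127.0.1.1 entry rather than appending a duplicate. A minimal standalone sketch of the same pattern (NEW_NAME is a placeholder; the log uses kindnet-293335):

    #!/usr/bin/env bash
    # Idempotently set a machine's hostname and its /etc/hosts entry.
    set -euo pipefail
    NEW_NAME="example-node"   # placeholder, not taken from the log

    sudo hostname "$NEW_NAME" && echo "$NEW_NAME" | sudo tee /etc/hostname

    if ! grep -q "[[:space:]]$NEW_NAME\$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
            # rewrite the existing 127.0.1.1 entry instead of appending a duplicate
            sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NEW_NAME/" /etc/hosts
        else
            echo "127.0.1.1 $NEW_NAME" | sudo tee -a /etc/hosts
        fi
    fi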
	I1027 22:40:49.943852  764907 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:40:49.943915  764907 ubuntu.go:190] setting up certificates
	I1027 22:40:49.943928  764907 provision.go:84] configureAuth start
	I1027 22:40:49.943993  764907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-293335
	I1027 22:40:49.960685  764907 provision.go:143] copyHostCerts
	I1027 22:40:49.960752  764907 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:40:49.960768  764907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:40:49.975047  764907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:40:49.975199  764907 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:40:49.975214  764907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:40:49.975269  764907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:40:49.975374  764907 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:40:49.975387  764907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:40:49.975425  764907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:40:49.975502  764907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.kindnet-293335 san=[127.0.0.1 192.168.85.2 kindnet-293335 localhost minikube]
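provision.go generates a server certificate whose SANs cover every name the endpoint can be reached by (loopback, the container IP, the node name, localhost, minikube). Minikube signs it against its own CA; a rough self-signed equivalent with the same SAN set, using plain OpenSSL (>= 1.1.1 for -addext; file names illustrative), would look like:

    # Illustrative only: minikube signs against ca.pem/ca-key.pem rather
    # than self-signing; this just shows the SAN shape.
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.kindnet-293335" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:kindnet-293335,DNS:localhost,DNS:minikube"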
	I1027 22:40:50.076492  764907 provision.go:177] copyRemoteCerts
	I1027 22:40:50.076547  764907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:40:50.076582  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:50.098993  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:50.205142  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:40:50.229537  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1027 22:40:50.269600  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:40:50.288919  764907 provision.go:87] duration metric: took 344.974229ms to configureAuth
	I1027 22:40:50.288976  764907 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:40:50.289173  764907 config.go:182] Loaded profile config "kindnet-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:50.289297  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:50.308506  764907 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:50.308791  764907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1027 22:40:50.308816  764907 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:40:50.805862  764907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:40:50.805909  764907 machine.go:97] duration metric: took 1.429120825s to provisionDockerMachine
	I1027 22:40:50.805922  764907 client.go:176] duration metric: took 7.420685863s to LocalClient.Create
	I1027 22:40:50.805965  764907 start.go:167] duration metric: took 7.420742159s to libmachine.API.Create "kindnet-293335"
	I1027 22:40:50.805978  764907 start.go:293] postStartSetup for "kindnet-293335" (driver="docker")
	I1027 22:40:50.805992  764907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:40:50.806051  764907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:40:50.806096  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:50.824020  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:50.930823  764907 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:40:50.935313  764907 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:40:50.935351  764907 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:40:50.935366  764907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:40:50.935430  764907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:40:50.935552  764907 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:40:50.935685  764907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:40:50.944597  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:50.968709  764907 start.go:296] duration metric: took 162.705731ms for postStartSetup
	I1027 22:40:50.969161  764907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-293335
	I1027 22:40:50.987389  764907 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/config.json ...
	I1027 22:40:50.987645  764907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:40:50.987696  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:51.007650  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:51.107441  764907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:40:51.112445  764907 start.go:128] duration metric: took 7.729036968s to createHost
	I1027 22:40:51.112480  764907 start.go:83] releasing machines lock for "kindnet-293335", held for 7.729154496s
	I1027 22:40:51.112557  764907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-293335
	I1027 22:40:51.130544  764907 ssh_runner.go:195] Run: cat /version.json
	I1027 22:40:51.130633  764907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:40:51.130650  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:51.130716  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:51.151229  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:51.151270  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:51.255443  764907 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:51.314740  764907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:40:51.353700  764907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:40:51.359333  764907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:40:51.359416  764907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:40:51.448089  764907 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
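Before installing its own CNI, minikube renames any pre-existing bridge/podman CNI configs with a .mk_disabled suffix so cri-o ignores them but they stay recoverable. The same move-aside pattern written as a loop (a sketch; the log uses a single find -exec):

    # Move conflicting CNI configs aside instead of deleting them.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
        [ -f "$f" ] || continue                   # skip unmatched globs
        case "$f" in *.mk_disabled) continue ;; esac
        sudo mv "$f" "$f.mk_disabled"
    done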
	I1027 22:40:51.448114  764907 start.go:496] detecting cgroup driver to use...
	I1027 22:40:51.448148  764907 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:40:51.448193  764907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:40:51.467289  764907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:40:51.486592  764907 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:40:51.486658  764907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:40:51.509280  764907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:40:51.529771  764907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:40:51.630099  764907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:40:51.746804  764907 docker.go:234] disabling docker service ...
	I1027 22:40:51.746872  764907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:40:51.769835  764907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:40:51.787656  764907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:40:51.900922  764907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:40:52.031237  764907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:40:52.053455  764907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:40:52.068525  764907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:40:52.069028  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.081877  764907 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:40:52.081939  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.091841  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.102178  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.114004  764907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:40:52.125359  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.138152  764907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.159281  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.170137  764907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:40:52.179202  764907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:40:52.200523  764907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:52.318765  764907 ssh_runner.go:195] Run: sudo systemctl restart crio
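The steps above edit /etc/crio/crio.conf.d/02-crio.conf in place: point pause_image at registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd (matching the driver detected on the host), pin conmon_cgroup to "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls so pods can bind low ports. Condensed into one sketch, using the same file and keys as the log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # pause image and cgroup driver, exactly as in the log
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # ensure a default_sysctls list exists, then prepend the low-port sysctl
    sudo grep -q '^ *default_sysctls' "$CONF" || \
        sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio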
	I1027 22:40:52.456883  764907 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:40:52.456981  764907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:40:52.461403  764907 start.go:564] Will wait 60s for crictl version
	I1027 22:40:52.461473  764907 ssh_runner.go:195] Run: which crictl
	I1027 22:40:52.466594  764907 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:40:52.499358  764907 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
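crictl picks up its endpoint from the /etc/crictl.yaml written a few steps earlier, so none of the calls here need a --runtime-endpoint flag:

    # /etc/crictl.yaml (from the log) contains:
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version               # the call made above via ssh_runner
    sudo crictl images --output json  # used below to verify preloaded images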
	I1027 22:40:52.499451  764907 ssh_runner.go:195] Run: crio --version
	I1027 22:40:52.529884  764907 ssh_runner.go:195] Run: crio --version
	I1027 22:40:52.570385  764907 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:40:52.571747  764907 cli_runner.go:164] Run: docker network inspect kindnet-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:52.591547  764907 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 22:40:52.596552  764907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
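The /etc/hosts update uses a rewrite-and-copy pattern: filter out any stale entry for the name, append a fresh one, write to a temp file, then sudo cp it over the original (a plain shell redirect to /etc/hosts would run without root). As a reusable sketch (ensure_hosts_entry is a hypothetical helper, not a minikube function):

    # Hypothetical wrapper around the pattern shown in the log.
    ensure_hosts_entry() {    # usage: ensure_hosts_entry IP NAME
        local ip="$1" name="$2" tmp="/tmp/hosts.$$"
        { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
        sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
    }
    ensure_hosts_entry 192.168.85.1 host.minikube.internal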
	I1027 22:40:52.608285  764907 kubeadm.go:884] updating cluster {Name:kindnet-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:40:52.608439  764907 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:52.608505  764907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:52.647446  764907 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:52.647472  764907 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:40:52.647529  764907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:52.682525  764907 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:52.682551  764907 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:40:52.682560  764907 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 22:40:52.682730  764907 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-293335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
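The drop-in above relies on a systemd convention: the empty ExecStart= line first clears the ExecStart inherited from /lib/systemd/system/kubelet.service, and the second line replaces it, so the small drop-in file fully controls the kubelet command line. Written out as the file itself (contents reconstructed from the log; the path comes from the scp step below):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet \
        --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
        --cgroups-per-qos=false \
        --config=/var/lib/kubelet/config.yaml \
        --enforce-node-allocatable= \
        --hostname-override=kindnet-293335 \
        --kubeconfig=/etc/kubernetes/kubelet.conf \
        --node-ip=192.168.85.2

    [Install]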
	I1027 22:40:52.682821  764907 ssh_runner.go:195] Run: crio config
	I1027 22:40:52.750528  764907 cni.go:84] Creating CNI manager for "kindnet"
	I1027 22:40:52.750574  764907 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:40:52.750604  764907 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-293335 NodeName:kindnet-293335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:40:52.750787  764907 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-293335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:40:52.750862  764907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:40:52.760518  764907 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:40:52.760585  764907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:40:52.770536  764907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1027 22:40:52.785111  764907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:40:52.801466  764907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
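The rendered kubeadm config (the stacked InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents above) is staged as /var/tmp/minikube/kubeadm.yaml.new, and kubeadm later consumes it via --config. Illustrative shape of that call only; minikube drives kubeadm itself and adds flags of its own:

    # Sketch of consuming the staged config (not the exact invocation).
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new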
	I1027 22:40:52.815738  764907 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:40:52.819968  764907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:52.830160  764907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:52.915668  764907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:52.949032  764907 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335 for IP: 192.168.85.2
	I1027 22:40:52.949057  764907 certs.go:195] generating shared ca certs ...
	I1027 22:40:52.949079  764907 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:52.949252  764907 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:40:52.949303  764907 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:40:52.949316  764907 certs.go:257] generating profile certs ...
	I1027 22:40:52.949391  764907 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.key
	I1027 22:40:52.949408  764907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.crt with IP's: []
	I1027 22:40:51.446634  766237 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.367722896s)
	I1027 22:40:51.446674  766237 kic.go:203] duration metric: took 3.367867287s to extract preloaded images to volume ...
	W1027 22:40:51.446771  766237 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:40:51.446821  766237 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:40:51.446876  766237 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:40:51.510996  766237 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-293335 --name calico-293335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-293335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-293335 --network calico-293335 --ip 192.168.76.2 --volume calico-293335:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
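The docker run above is the entire "machine": privileged with seccomp/apparmor unconfined (cri-o and systemd must manage cgroups and mounts inside), tmpfs on /tmp and /run, the named volume mounted at /var as the node's persistent disk, a fixed IP on a dedicated network, and each --publish binding a container port to an ephemeral loopback port (inspected later for SSH). A trimmed sketch of the same shape (NODE is a placeholder; assumes a docker network named "$NODE" already exists, and omits the --ip pin and image digest):

    NODE=example-node
    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro --volume "$NODE":/var \
      --network "$NODE" --memory=3072mb \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      --hostname "$NODE" --name "$NODE" \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773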
	I1027 22:40:51.865318  766237 cli_runner.go:164] Run: docker container inspect calico-293335 --format={{.State.Running}}
	I1027 22:40:51.888864  766237 cli_runner.go:164] Run: docker container inspect calico-293335 --format={{.State.Status}}
	I1027 22:40:51.909765  766237 cli_runner.go:164] Run: docker exec calico-293335 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:40:51.969116  766237 oci.go:144] the created container "calico-293335" has a running status.
	I1027 22:40:51.969162  766237 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa...
	I1027 22:40:52.208036  766237 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:40:52.258322  766237 cli_runner.go:164] Run: docker container inspect calico-293335 --format={{.State.Status}}
	I1027 22:40:52.280060  766237 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:40:52.280087  766237 kic_runner.go:114] Args: [docker exec --privileged calico-293335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:40:52.340055  766237 cli_runner.go:164] Run: docker container inspect calico-293335 --format={{.State.Status}}
	I1027 22:40:52.359580  766237 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:52.359688  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:52.380278  766237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:52.380639  766237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1027 22:40:52.380668  766237 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:52.537095  766237 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-293335
	
	I1027 22:40:52.537131  766237 ubuntu.go:182] provisioning hostname "calico-293335"
	I1027 22:40:52.537200  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:52.561910  766237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:52.562445  766237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1027 22:40:52.562472  766237 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-293335 && echo "calico-293335" | sudo tee /etc/hostname
	I1027 22:40:52.734619  766237 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-293335
	
	I1027 22:40:52.734707  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:52.753676  766237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:52.753995  766237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1027 22:40:52.754027  766237 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-293335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-293335/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-293335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:40:52.903843  766237 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:40:52.903874  766237 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:40:52.903897  766237 ubuntu.go:190] setting up certificates
	I1027 22:40:52.903912  766237 provision.go:84] configureAuth start
	I1027 22:40:52.904021  766237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-293335
	I1027 22:40:52.925391  766237 provision.go:143] copyHostCerts
	I1027 22:40:52.925465  766237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:40:52.925491  766237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:40:52.925589  766237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:40:52.925729  766237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:40:52.925742  766237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:40:52.925785  766237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:40:52.925891  766237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:40:52.925902  766237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:40:52.925938  766237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:40:52.926063  766237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.calico-293335 san=[127.0.0.1 192.168.76.2 calico-293335 localhost minikube]
	I1027 22:40:53.029282  766237 provision.go:177] copyRemoteCerts
	I1027 22:40:53.029337  766237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:40:53.029370  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.046989  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:53.149786  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:40:53.170180  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 22:40:53.188150  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:40:53.205256  766237 provision.go:87] duration metric: took 301.326822ms to configureAuth
	I1027 22:40:53.205285  766237 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:40:53.205439  766237 config.go:182] Loaded profile config "calico-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:53.205592  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.224194  766237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:53.224503  766237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1027 22:40:53.224543  766237 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:40:53.497979  766237 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:40:53.498007  766237 machine.go:97] duration metric: took 1.138401268s to provisionDockerMachine
	I1027 22:40:53.498017  766237 client.go:176] duration metric: took 8.239974406s to LocalClient.Create
	I1027 22:40:53.498040  766237 start.go:167] duration metric: took 8.240050323s to libmachine.API.Create "calico-293335"
	I1027 22:40:53.498051  766237 start.go:293] postStartSetup for "calico-293335" (driver="docker")
	I1027 22:40:53.498064  766237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:40:53.498128  766237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:40:53.498195  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.519767  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:53.623886  766237 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:40:53.627827  766237 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:40:53.627860  766237 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:40:53.627873  766237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:40:53.627917  766237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:40:53.628068  766237 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:40:53.628204  766237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:40:53.636125  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:53.656207  766237 start.go:296] duration metric: took 158.142832ms for postStartSetup
	I1027 22:40:53.656553  766237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-293335
	I1027 22:40:53.676775  766237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/config.json ...
	I1027 22:40:53.677102  766237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:40:53.677159  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.699406  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:53.801467  766237 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:40:53.806541  766237 start.go:128] duration metric: took 8.550734809s to createHost
	I1027 22:40:53.806573  766237 start.go:83] releasing machines lock for "calico-293335", held for 8.550920879s
	I1027 22:40:53.806657  766237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-293335
	I1027 22:40:53.824637  766237 ssh_runner.go:195] Run: cat /version.json
	I1027 22:40:53.824692  766237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:40:53.824705  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.824760  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.843390  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:53.845254  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:54.002508  766237 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:54.009727  766237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:40:54.048990  766237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:40:54.054419  766237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:40:54.054478  766237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:40:54.082233  766237 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 22:40:54.082261  766237 start.go:496] detecting cgroup driver to use...
	I1027 22:40:54.082295  766237 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:40:54.082361  766237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:40:54.101079  766237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:40:54.113791  766237 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:40:54.113854  766237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:40:54.132045  766237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:40:54.151507  766237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:40:54.259215  766237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:40:54.380935  766237 docker.go:234] disabling docker service ...
	I1027 22:40:54.381026  766237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:40:54.402082  766237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:40:54.415939  766237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:40:54.524038  766237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:40:54.630569  766237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:40:54.643844  766237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:40:54.659000  766237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:40:54.659073  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.671148  766237 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:40:54.671216  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.681376  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.691671  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.700257  766237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:40:54.708994  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.718844  766237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.734139  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.743246  766237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:40:54.751043  766237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:40:54.758461  766237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:54.846192  766237 ssh_runner.go:195] Run: sudo systemctl restart crio
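Restarting CRI-O makes it reread the drop-in the sed edits just rewrote; a quick sanity check of those settings (file path and socket from the log; the grep pattern is only illustrative):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version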
	I1027 22:40:51.932890  769174 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:40:51.933235  769174 start.go:159] libmachine.API.Create for "custom-flannel-293335" (driver="docker")
	I1027 22:40:51.933279  769174 client.go:173] LocalClient.Create starting
	I1027 22:40:51.933368  769174 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:40:51.933418  769174 main.go:143] libmachine: Decoding PEM data...
	I1027 22:40:51.933444  769174 main.go:143] libmachine: Parsing certificate...
	I1027 22:40:51.933542  769174 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:40:51.933578  769174 main.go:143] libmachine: Decoding PEM data...
	I1027 22:40:51.933591  769174 main.go:143] libmachine: Parsing certificate...
	I1027 22:40:51.934061  769174 cli_runner.go:164] Run: docker network inspect custom-flannel-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:40:51.957619  769174 cli_runner.go:211] docker network inspect custom-flannel-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:40:51.957682  769174 network_create.go:284] running [docker network inspect custom-flannel-293335] to gather additional debugging logs...
	I1027 22:40:51.957738  769174 cli_runner.go:164] Run: docker network inspect custom-flannel-293335
	W1027 22:40:51.978287  769174 cli_runner.go:211] docker network inspect custom-flannel-293335 returned with exit code 1
	I1027 22:40:51.978331  769174 network_create.go:287] error running [docker network inspect custom-flannel-293335]: docker network inspect custom-flannel-293335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-293335 not found
	I1027 22:40:51.978351  769174 network_create.go:289] output of [docker network inspect custom-flannel-293335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-293335 not found
	
	** /stderr **
	I1027 22:40:51.978500  769174 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:51.998776  769174 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:40:51.999808  769174 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:40:52.000370  769174 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:40:52.001238  769174 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4ce6e82cd489 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:d8:04:42:9a:06} reservation:<nil>}
	I1027 22:40:52.002080  769174 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-608fda872b8d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:32:7c:76:35:72} reservation:<nil>}
	I1027 22:40:52.003151  769174 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e5c6f0}
	I1027 22:40:52.003184  769174 network_create.go:124] attempt to create docker network custom-flannel-293335 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 22:40:52.003252  769174 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-293335 custom-flannel-293335
	I1027 22:40:52.073623  769174 network_create.go:108] docker network custom-flannel-293335 192.168.94.0/24 created
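With the network created, the subnet and gateway that network_create picked can be read back with a short Go-template query (network name from the log):

	docker network inspect custom-flannel-293335 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'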
	I1027 22:40:52.073653  769174 kic.go:121] calculated static IP "192.168.94.2" for the "custom-flannel-293335" container
	I1027 22:40:52.073720  769174 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:40:52.092023  769174 cli_runner.go:164] Run: docker volume create custom-flannel-293335 --label name.minikube.sigs.k8s.io=custom-flannel-293335 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:40:52.113578  769174 oci.go:103] Successfully created a docker volume custom-flannel-293335
	I1027 22:40:52.113659  769174 cli_runner.go:164] Run: docker run --rm --name custom-flannel-293335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-293335 --entrypoint /usr/bin/test -v custom-flannel-293335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:40:52.581594  769174 oci.go:107] Successfully prepared a docker volume custom-flannel-293335
	I1027 22:40:52.581652  769174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:52.581679  769174 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:40:52.581760  769174 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
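The extraction step mounts the preload tarball read-only and untars it straight into the named volume inside a throwaway kicbase container; the same pattern in isolation, with $TARBALL, $VOLUME and $KIC_IMAGE as placeholders for the values in the log line above:

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$TARBALL":/preloaded.tar:ro \
	  -v "$VOLUME":/extractDir \
	  "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir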
	I1027 22:40:57.183824  766237 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.337587879s)
	I1027 22:40:57.183872  766237 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:40:57.183934  766237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:40:57.188627  766237 start.go:564] Will wait 60s for crictl version
	I1027 22:40:57.188672  766237 ssh_runner.go:195] Run: which crictl
	I1027 22:40:57.192665  766237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:40:57.220069  766237 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:40:57.220133  766237 ssh_runner.go:195] Run: crio --version
	I1027 22:40:57.252385  766237 ssh_runner.go:195] Run: crio --version
	I1027 22:40:57.290685  766237 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
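Both 60-second waits above poll for the CRI socket and then for a working crictl; a rough shell equivalent of that loop (socket path from the log):

	for _ in $(seq 1 60); do
	  [ -S /var/run/crio/crio.sock ] && break   # stop as soon as the socket exists
	  sleep 1
	done
	sudo crictl version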
	W1027 22:40:54.194496  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:56.741030  753728 pod_ready.go:94] pod "coredns-66bc5c9577-bvr8f" is "Ready"
	I1027 22:40:56.741063  753728 pod_ready.go:86] duration metric: took 39.05899716s for pod "coredns-66bc5c9577-bvr8f" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.743363  753728 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.747077  753728 pod_ready.go:94] pod "etcd-default-k8s-diff-port-927034" is "Ready"
	I1027 22:40:56.747096  753728 pod_ready.go:86] duration metric: took 3.70853ms for pod "etcd-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.748964  753728 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.752297  753728 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-927034" is "Ready"
	I1027 22:40:56.752319  753728 pod_ready.go:86] duration metric: took 3.336424ms for pod "kube-apiserver-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.753900  753728 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.964146  753728 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-927034" is "Ready"
	I1027 22:40:56.964180  753728 pod_ready.go:86] duration metric: took 210.259581ms for pod "kube-controller-manager-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:57.085513  753728 pod_ready.go:83] waiting for pod "kube-proxy-42dj4" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:57.485865  753728 pod_ready.go:94] pod "kube-proxy-42dj4" is "Ready"
	I1027 22:40:57.485893  753728 pod_ready.go:86] duration metric: took 400.353128ms for pod "kube-proxy-42dj4" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:57.690363  753728 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:58.085850  753728 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-927034" is "Ready"
	I1027 22:40:58.085884  753728 pod_ready.go:86] duration metric: took 395.468301ms for pod "kube-scheduler-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:58.085898  753728 pod_ready.go:40] duration metric: took 40.407360832s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:40:58.136014  753728 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:40:58.137054  753728 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-927034" cluster and "default" namespace by default
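The pod_ready polling above corresponds roughly to a kubectl wait on the same component labels; for example, for the CoreDNS pod that took 39s (context name from the log):

	kubectl --context default-k8s-diff-port-927034 -n kube-system \
	  wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=60s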
	I1027 22:40:53.234463  764907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.crt ...
	I1027 22:40:53.234492  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.crt: {Name:mk8a2a4e4b1b7f25a50930365fa42a1aeaf808e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.234671  764907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.key ...
	I1027 22:40:53.234686  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.key: {Name:mk17fe7e0b8300369bdd6fde5af683c8c3797d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.234794  764907 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key.bc0cab6b
	I1027 22:40:53.234810  764907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt.bc0cab6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 22:40:53.747227  764907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt.bc0cab6b ...
	I1027 22:40:53.747254  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt.bc0cab6b: {Name:mkacd917aaf5e9d405b91cc9b91d2556a5c51006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.747438  764907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key.bc0cab6b ...
	I1027 22:40:53.747456  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key.bc0cab6b: {Name:mka5d7a523b0434d926752f0e350c36c00626981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.747559  764907 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt.bc0cab6b -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt
	I1027 22:40:53.747677  764907 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key.bc0cab6b -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key
	I1027 22:40:53.747761  764907 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.key
	I1027 22:40:53.747784  764907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.crt with IP's: []
	I1027 22:40:53.867931  764907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.crt ...
	I1027 22:40:53.867971  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.crt: {Name:mkcb8d2bf0f39ac86fa2fc73b388ddace10578a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.868164  764907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.key ...
	I1027 22:40:53.868184  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.key: {Name:mk742361a9670894b2da88022d7c0a09ae9546b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
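Each "generating signed profile cert" pair above is the standard CSR-plus-CA-signature flow; a minimal openssl sketch of the same idea (file names here are placeholders, not minikube's actual paths or helpers):

	openssl req -new -newkey rsa:2048 -nodes -subj '/CN=minikube-user' \
	  -keyout client.key -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
	  -CAcreateserial -days 365 -out client.crt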
	I1027 22:40:53.868413  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:40:53.868453  764907 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:40:53.868467  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:40:53.868497  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:40:53.868521  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:40:53.868548  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:40:53.868599  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:53.869171  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:40:53.889070  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:40:53.909429  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:40:53.930564  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:40:53.949254  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:40:53.969006  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:40:53.989244  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:40:54.008096  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:40:54.029180  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:40:54.052612  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:40:54.072953  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:40:54.092106  764907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:40:54.106701  764907 ssh_runner.go:195] Run: openssl version
	I1027 22:40:54.113393  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:40:54.122899  764907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:54.126935  764907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:54.127000  764907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:54.164632  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:40:54.173835  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:40:54.184434  764907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:40:54.192984  764907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:40:54.193074  764907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:40:54.247930  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:40:54.257906  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:40:54.267831  764907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:40:54.271852  764907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:40:54.271914  764907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:40:54.333361  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
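The /etc/ssl/certs/<hash>.0 link names above are not arbitrary: each is the OpenSSL subject hash that c_rehash-style CA lookup expects, which is why every ln is preceded by an openssl x509 -hash run (b5213941 is the hash printed for minikubeCA). The same dance for one cert, with the path from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"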
	I1027 22:40:54.342635  764907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:40:54.346503  764907 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:40:54.346562  764907 kubeadm.go:401] StartCluster: {Name:kindnet-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:54.346636  764907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:40:54.346690  764907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:40:54.378097  764907 cri.go:89] found id: ""
	I1027 22:40:54.378178  764907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:40:54.387988  764907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:40:54.396257  764907 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:40:54.396316  764907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:40:54.405014  764907 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:40:54.405036  764907 kubeadm.go:158] found existing configuration files:
	
	I1027 22:40:54.405078  764907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:40:54.412982  764907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:40:54.413037  764907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:40:54.420999  764907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:40:54.428872  764907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:40:54.428930  764907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:40:54.437438  764907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:40:54.447248  764907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:40:54.447310  764907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:40:54.461700  764907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:40:54.471703  764907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:40:54.471770  764907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:40:54.480229  764907 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:40:54.548278  764907 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 22:40:54.620849  764907 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
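The SystemVerification warning above comes from kubeadm trying to read the kernel config, which this GCP kernel does not ship as a loadable module; it is harmless here because SystemVerification is in the --ignore-preflight-errors list. It can be reproduced by hand:

	sudo modprobe configs 2>/dev/null || echo 'module configs unavailable'
	ls /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null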
	I1027 22:40:57.291849  766237 cli_runner.go:164] Run: docker network inspect calico-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:57.309524  766237 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:40:57.313748  766237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:57.324782  766237 kubeadm.go:884] updating cluster {Name:calico-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:40:57.324939  766237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:57.325034  766237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:57.361809  766237 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:57.361833  766237 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:40:57.361887  766237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:57.391558  766237 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:57.391577  766237 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:40:57.391585  766237 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 22:40:57.391680  766237 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-293335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1027 22:40:57.391743  766237 ssh_runner.go:195] Run: crio config
	I1027 22:40:57.459773  766237 cni.go:84] Creating CNI manager for "calico"
	I1027 22:40:57.459803  766237 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:40:57.459824  766237 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-293335 NodeName:calico-293335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:40:57.459934  766237 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-293335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
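A generated file like the one above can be checked before it is ever fed to kubeadm init; recent kubeadm releases ship a standalone validator (config path from the log):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml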
	
	I1027 22:40:57.460013  766237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:40:57.471187  766237 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:40:57.471258  766237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:40:57.480735  766237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 22:40:57.497542  766237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:40:57.516915  766237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1027 22:40:57.530417  766237 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:40:57.534898  766237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:57.546283  766237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:57.652241  766237 ssh_runner.go:195] Run: sudo systemctl start kubelet
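After the 10-kubeadm.conf drop-in and unit file are scp'd into place, the daemon-reload/start pair activates kubelet; the effective unit, including the drop-in, can be inspected with:

	systemctl cat kubelet
	systemctl status kubelet --no-pager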
	I1027 22:40:57.674264  766237 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335 for IP: 192.168.76.2
	I1027 22:40:57.674285  766237 certs.go:195] generating shared ca certs ...
	I1027 22:40:57.674306  766237 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:57.674490  766237 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:40:57.674550  766237 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:40:57.674563  766237 certs.go:257] generating profile certs ...
	I1027 22:40:57.674640  766237 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.key
	I1027 22:40:57.674655  766237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.crt with IP's: []
	I1027 22:40:57.810865  766237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.crt ...
	I1027 22:40:57.810893  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.crt: {Name:mk629e57e640b2d978cc7e13e15f6398293dfeea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:57.811151  766237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.key ...
	I1027 22:40:57.811181  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.key: {Name:mk2a201a94556c2d8f3f8e188277f1d484d58800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:57.811317  766237 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key.897c17e8
	I1027 22:40:57.811341  766237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt.897c17e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 22:40:58.163850  766237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt.897c17e8 ...
	I1027 22:40:58.163916  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt.897c17e8: {Name:mkf2dbf16899bc5e31429a004da738ca0eecd618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:58.164152  766237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key.897c17e8 ...
	I1027 22:40:58.164221  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key.897c17e8: {Name:mkc75ce064a0a25d6f8f99ff3ef6715f417c64f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:58.164397  766237 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt.897c17e8 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt
	I1027 22:40:58.164531  766237 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key.897c17e8 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key
	I1027 22:40:58.164639  766237 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.key
	I1027 22:40:58.164680  766237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.crt with IP's: []
	I1027 22:40:58.716536  766237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.crt ...
	I1027 22:40:58.716564  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.crt: {Name:mkd20b58ab16cc7fcddd160ed6065699ca0a847c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:58.716744  766237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.key ...
	I1027 22:40:58.716756  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.key: {Name:mk7ed32c95e75a8c4ed6a2b273255eb2129c50d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:58.716929  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:40:58.716986  766237 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:40:58.717001  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:40:58.717029  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:40:58.717054  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:40:58.717075  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:40:58.717124  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:58.717699  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:40:58.736363  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:40:58.753781  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:40:58.770568  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:40:58.787800  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:40:58.805810  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:40:58.824258  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:40:58.843071  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:40:58.861187  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:40:58.880304  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:40:58.897432  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:40:58.915417  766237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:40:58.928692  766237 ssh_runner.go:195] Run: openssl version
	I1027 22:40:58.935482  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:40:58.944753  766237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:58.949220  766237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:58.949288  766237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:58.986673  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:40:58.995832  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:40:59.004727  766237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:40:59.008427  766237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:40:59.008484  766237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:40:59.043062  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:40:59.052259  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:40:59.060637  766237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:40:59.064417  766237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:40:59.064479  766237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:40:59.099738  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:40:59.109048  766237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:40:59.112950  766237 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:40:59.113019  766237 kubeadm.go:401] StartCluster: {Name:calico-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:59.113095  766237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:40:59.113139  766237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:40:59.142926  766237 cri.go:89] found id: ""
	I1027 22:40:59.143015  766237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:40:59.151697  766237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:40:59.160335  766237 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:40:59.160409  766237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:40:59.169171  766237 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:40:59.169189  766237 kubeadm.go:158] found existing configuration files:
	
	I1027 22:40:59.169256  766237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:40:59.180395  766237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:40:59.180571  766237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:40:59.191609  766237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:40:59.200812  766237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:40:59.200885  766237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:40:59.210313  766237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:40:59.220029  766237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:40:59.220112  766237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:40:59.228175  766237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:40:59.237010  766237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:40:59.237078  766237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:40:59.245045  766237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:40:59.286080  766237 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:40:59.286157  766237 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:40:59.309426  766237 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:40:59.309527  766237 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:40:59.309579  766237 kubeadm.go:319] OS: Linux
	I1027 22:40:59.309662  766237 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:40:59.309753  766237 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:40:59.309827  766237 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:40:59.309897  766237 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:40:59.309980  766237 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:40:59.310054  766237 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:40:59.310129  766237 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:40:59.310192  766237 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:40:59.370851  766237 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:40:59.370996  766237 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:40:59.371123  766237 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:40:59.379315  766237 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:40:59.381157  766237 out.go:252]   - Generating certificates and keys ...
	I1027 22:40:59.381258  766237 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:40:59.381349  766237 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:40:59.471724  766237 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:40:59.920277  766237 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:40:59.969156  766237 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:40:57.090281  769174 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.508476726s)
	I1027 22:40:57.090308  769174 kic.go:203] duration metric: took 4.508627655s to extract preloaded images to volume ...
	W1027 22:40:57.090401  769174 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:40:57.090444  769174 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:40:57.090501  769174 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:40:57.155616  769174 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-293335 --name custom-flannel-293335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-293335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-293335 --network custom-flannel-293335 --ip 192.168.94.2 --volume custom-flannel-293335:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:40:57.444277  769174 cli_runner.go:164] Run: docker container inspect custom-flannel-293335 --format={{.State.Running}}
	I1027 22:40:57.464933  769174 cli_runner.go:164] Run: docker container inspect custom-flannel-293335 --format={{.State.Status}}
	I1027 22:40:57.486157  769174 cli_runner.go:164] Run: docker exec custom-flannel-293335 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:40:57.534261  769174 oci.go:144] the created container "custom-flannel-293335" has a running status.
	I1027 22:40:57.534293  769174 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa...
	I1027 22:40:57.568573  769174 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:40:57.601245  769174 cli_runner.go:164] Run: docker container inspect custom-flannel-293335 --format={{.State.Status}}
	I1027 22:40:57.627483  769174 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:40:57.627512  769174 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-293335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:40:57.682602  769174 cli_runner.go:164] Run: docker container inspect custom-flannel-293335 --format={{.State.Status}}
	I1027 22:40:57.706361  769174 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:57.706466  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:40:57.730720  769174 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:57.731121  769174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1027 22:40:57.731246  769174 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:57.731978  769174 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35428->127.0.0.1:33118: read: connection reset by peer
	I1027 22:41:00.897267  769174 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-293335
	
	I1027 22:41:00.897301  769174 ubuntu.go:182] provisioning hostname "custom-flannel-293335"
	I1027 22:41:00.897368  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:00.923192  769174 main.go:143] libmachine: Using SSH client type: native
	I1027 22:41:00.923413  769174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1027 22:41:00.923426  769174 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-293335 && echo "custom-flannel-293335" | sudo tee /etc/hostname
	I1027 22:41:01.081500  769174 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-293335
	
	I1027 22:41:01.081589  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:01.103293  769174 main.go:143] libmachine: Using SSH client type: native
	I1027 22:41:01.103582  769174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1027 22:41:01.103602  769174 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-293335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-293335/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-293335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:41:01.249302  769174 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:41:01.249346  769174 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:41:01.249378  769174 ubuntu.go:190] setting up certificates
	I1027 22:41:01.249396  769174 provision.go:84] configureAuth start
	I1027 22:41:01.249467  769174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-293335
	I1027 22:41:01.269271  769174 provision.go:143] copyHostCerts
	I1027 22:41:01.269337  769174 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:41:01.269352  769174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:41:01.269442  769174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:41:01.269570  769174 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:41:01.269580  769174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:41:01.269617  769174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:41:01.269704  769174 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:41:01.269713  769174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:41:01.269744  769174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:41:01.269825  769174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-293335 san=[127.0.0.1 192.168.94.2 custom-flannel-293335 localhost minikube]
	I1027 22:41:01.737659  769174 provision.go:177] copyRemoteCerts
	I1027 22:41:01.737728  769174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:41:01.737791  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:01.760868  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:01.869920  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:41:01.893759  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:41:01.917866  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1027 22:41:01.939251  769174 provision.go:87] duration metric: took 689.835044ms to configureAuth
	I1027 22:41:01.939284  769174 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:41:01.939494  769174 config.go:182] Loaded profile config "custom-flannel-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:41:01.939612  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:01.959285  769174 main.go:143] libmachine: Using SSH client type: native
	I1027 22:41:01.959634  769174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1027 22:41:01.959662  769174 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:41:02.285644  769174 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:41:02.285676  769174 machine.go:97] duration metric: took 4.579290946s to provisionDockerMachine
	I1027 22:41:02.285690  769174 client.go:176] duration metric: took 10.352400611s to LocalClient.Create
	I1027 22:41:02.285718  769174 start.go:167] duration metric: took 10.352486469s to libmachine.API.Create "custom-flannel-293335"
	I1027 22:41:02.285729  769174 start.go:293] postStartSetup for "custom-flannel-293335" (driver="docker")
	I1027 22:41:02.285747  769174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:41:02.285829  769174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:41:02.285897  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:02.306389  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:02.425456  769174 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:41:02.429252  769174 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:41:02.429281  769174 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:41:02.429292  769174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:41:02.429411  769174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:41:02.429515  769174 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:41:02.429638  769174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:41:02.437650  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:41:02.458607  769174 start.go:296] duration metric: took 172.856806ms for postStartSetup
	I1027 22:41:02.459004  769174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-293335
	I1027 22:41:02.486822  769174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/config.json ...
	I1027 22:41:02.487191  769174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:41:02.487247  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:02.510754  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:02.615681  769174 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:41:02.621194  769174 start.go:128] duration metric: took 10.690821914s to createHost
	I1027 22:41:02.621221  769174 start.go:83] releasing machines lock for "custom-flannel-293335", held for 10.690983139s
	I1027 22:41:02.621307  769174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-293335
	I1027 22:41:02.643258  769174 ssh_runner.go:195] Run: cat /version.json
	I1027 22:41:02.643334  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:02.643351  769174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:41:02.643428  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:02.665468  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:02.666006  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:02.822349  769174 ssh_runner.go:195] Run: systemctl --version
	I1027 22:41:02.829326  769174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:41:02.866157  769174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:41:02.871295  769174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:41:02.871376  769174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:41:02.896721  769174 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 22:41:02.896747  769174 start.go:496] detecting cgroup driver to use...
	I1027 22:41:02.896778  769174 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:41:02.896840  769174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:41:02.919291  769174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:41:02.933526  769174 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:41:02.933577  769174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:41:02.952734  769174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:41:02.972547  769174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:41:03.057051  769174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:41:03.147312  769174 docker.go:234] disabling docker service ...
	I1027 22:41:03.147393  769174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:41:03.167432  769174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:41:03.180895  769174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:41:03.269331  769174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:41:03.363726  769174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:41:03.379511  769174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:41:03.397598  769174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:41:03.397662  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.409720  769174 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:41:03.409803  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.421351  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.433007  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.443855  769174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:41:03.454036  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.465173  769174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.482368  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.493715  769174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:41:03.503082  769174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:41:03.512231  769174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:41:03.612076  769174 ssh_runner.go:195] Run: sudo systemctl restart crio
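The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting crio: pin the pause image, force the systemd cgroup manager, re-create conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A hedged Go sketch of the replace-or-append idiom those `sed -i` calls implement, operating on a local file for illustration (function name and file handling are ours, not minikube's):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces an existing `key = ...` line or appends one, the effect
// the repeated `sed -i 's|^.*key = .*$|...|'` calls above have on the file.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf("%s = %s", key, value)
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	const conf = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	_ = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.10.1"`)
	_ = setKey(conf, "cgroup_manager", `"systemd"`)
}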
	I1027 22:41:03.748140  769174 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:41:03.748212  769174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:41:03.753601  769174 start.go:564] Will wait 60s for crictl version
	I1027 22:41:03.753673  769174 ssh_runner.go:195] Run: which crictl
	I1027 22:41:03.758152  769174 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:41:03.789412  769174 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
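The two "Will wait 60s" lines above (start.go:543/564) are the same pattern: poll a condition, first the CRI-O socket and then a `crictl version` response, until a deadline. A minimal Go sketch of that poll-with-deadline loop, with a plain unix-socket dial standing in for the stat/crictl probes:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes, the wait-up-to-60s loop the log applies to crio.sock.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not ready after %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}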
	I1027 22:41:03.789510  769174 ssh_runner.go:195] Run: crio --version
	I1027 22:41:03.824915  769174 ssh_runner.go:195] Run: crio --version
	I1027 22:41:03.862290  769174 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:41:00.129184  766237 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:41:00.289522  766237 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:41:00.289683  766237 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-293335 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 22:41:00.933672  766237 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:41:00.933919  766237 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-293335 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 22:41:01.191005  766237 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:41:01.725995  766237 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:41:02.192277  766237 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:41:02.192445  766237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:41:02.442182  766237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:41:02.972371  766237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:41:03.414492  766237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:41:03.851564  766237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:41:04.114526  766237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:41:04.114983  766237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:41:04.118435  766237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:41:04.119885  766237 out.go:252]   - Booting up control plane ...
	I1027 22:41:04.120021  766237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:41:04.120119  766237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:41:04.121812  766237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:41:04.138479  766237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:41:04.138682  766237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:41:04.145096  766237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:41:04.145374  766237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:41:04.145461  766237 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:41:04.254165  766237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:41:04.254352  766237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:41:04.755805  766237 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.795781ms
	I1027 22:41:04.760374  766237 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:41:04.760506  766237 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 22:41:04.760688  766237 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:41:04.760829  766237 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
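kubeadm's [control-plane-check] above polls each component's HTTPS health endpoint until it answers 200. A hedged Go sketch of such a probe, skipping TLS verification because the components serve certificates the host does not trust yet at bootstrap (endpoints copied from the log; the probe itself is illustrative, not kubeadm's code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe polls an HTTPS health endpoint until it returns 200 or the
// deadline passes, roughly what [control-plane-check] does above.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Bootstrap components use self-signed serving certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, probe(u, 4*time.Minute))
	}
}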
	I1027 22:41:05.680318  764907 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:41:05.680402  764907 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:41:05.680523  764907 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:41:05.680591  764907 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:41:05.680631  764907 kubeadm.go:319] OS: Linux
	I1027 22:41:05.680687  764907 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:41:05.680748  764907 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:41:05.680808  764907 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:41:05.680866  764907 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:41:05.680925  764907 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:41:05.680995  764907 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:41:05.681045  764907 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:41:05.681088  764907 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:41:05.681165  764907 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:41:05.681272  764907 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:41:05.681370  764907 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:41:05.681460  764907 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:41:05.683109  764907 out.go:252]   - Generating certificates and keys ...
	I1027 22:41:05.683284  764907 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:41:05.683462  764907 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:41:05.683561  764907 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:41:05.683633  764907 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:41:05.683708  764907 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:41:05.683770  764907 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:41:05.683832  764907 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:41:05.683980  764907 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-293335 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 22:41:05.684050  764907 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:41:05.684208  764907 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-293335 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 22:41:05.684295  764907 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:41:05.684368  764907 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:41:05.684428  764907 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:41:05.684503  764907 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:41:05.684563  764907 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:41:05.684624  764907 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:41:05.684681  764907 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:41:05.684762  764907 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:41:05.684827  764907 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:41:05.685010  764907 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:41:05.685161  764907 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:41:05.687327  764907 out.go:252]   - Booting up control plane ...
	I1027 22:41:05.687532  764907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:41:05.687794  764907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:41:05.687882  764907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:41:05.688055  764907 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:41:05.688176  764907 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:41:05.688419  764907 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:41:05.688718  764907 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:41:05.688904  764907 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:41:05.689239  764907 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:41:05.689497  764907 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:41:05.689647  764907 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001146285s
	I1027 22:41:05.690016  764907 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:41:05.690176  764907 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 22:41:05.690351  764907 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:41:05.690463  764907 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:41:05.690553  764907 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.129331634s
	I1027 22:41:05.690649  764907 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.232695742s
	I1027 22:41:05.690738  764907 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001408494s
	I1027 22:41:05.690877  764907 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:41:05.691034  764907 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:41:05.691115  764907 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:41:05.691425  764907 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-293335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:41:05.691516  764907 kubeadm.go:319] [bootstrap-token] Using token: 529thl.08hybtrxaqjgjt94
	I1027 22:41:05.692906  764907 out.go:252]   - Configuring RBAC rules ...
	I1027 22:41:05.693089  764907 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:41:05.693268  764907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:41:05.693494  764907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:41:05.693655  764907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:41:05.693835  764907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:41:05.694000  764907 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:41:05.694185  764907 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:41:05.694259  764907 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:41:05.694325  764907 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:41:05.694333  764907 kubeadm.go:319] 
	I1027 22:41:05.694418  764907 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:41:05.694427  764907 kubeadm.go:319] 
	I1027 22:41:05.694526  764907 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:41:05.694534  764907 kubeadm.go:319] 
	I1027 22:41:05.694568  764907 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:41:05.694677  764907 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:41:05.694763  764907 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:41:05.694776  764907 kubeadm.go:319] 
	I1027 22:41:05.694855  764907 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:41:05.694865  764907 kubeadm.go:319] 
	I1027 22:41:05.694938  764907 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:41:05.694962  764907 kubeadm.go:319] 
	I1027 22:41:05.695036  764907 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:41:05.695151  764907 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:41:05.695245  764907 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:41:05.695258  764907 kubeadm.go:319] 
	I1027 22:41:05.695362  764907 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:41:05.695471  764907 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:41:05.695481  764907 kubeadm.go:319] 
	I1027 22:41:05.695586  764907 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 529thl.08hybtrxaqjgjt94 \
	I1027 22:41:05.695711  764907 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 22:41:05.695740  764907 kubeadm.go:319] 	--control-plane 
	I1027 22:41:05.695745  764907 kubeadm.go:319] 
	I1027 22:41:05.695847  764907 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:41:05.695856  764907 kubeadm.go:319] 
	I1027 22:41:05.695968  764907 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 529thl.08hybtrxaqjgjt94 \
	I1027 22:41:05.696111  764907 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
	I1027 22:41:05.696128  764907 cni.go:84] Creating CNI manager for "kindnet"
	I1027 22:41:05.698125  764907 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:41:03.863325  769174 cli_runner.go:164] Run: docker network inspect custom-flannel-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:41:03.884107  769174 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 22:41:03.888928  769174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:41:03.900357  769174 kubeadm.go:884] updating cluster {Name:custom-flannel-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:41:03.900494  769174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:41:03.900544  769174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:41:03.940214  769174 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:41:03.940247  769174 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:41:03.940310  769174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:41:03.971270  769174 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:41:03.971293  769174 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:41:03.971304  769174 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 22:41:03.971417  769174 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-293335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
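The kubelet drop-in printed above (written below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 371 bytes) is a template with the hostname and node IP filled in. A hedged, abridged Go sketch rendering the same shape with text/template (the struct and field names are illustrative, not minikube's; some flags from the log are omitted for brevity):

package main

import (
	"os"
	"text/template"
)

// unitTmpl mirrors (abridged) the kubelet systemd drop-in in the log above.
var unitTmpl = template.Must(template.New("unit").Parse(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`))

func main() {
	// Values copied from the log; the anonymous struct is illustrative only.
	_ = unitTmpl.Execute(os.Stdout, struct {
		KubernetesVersion, Hostname, NodeIP string
	}{"v1.34.1", "custom-flannel-293335", "192.168.94.2"})
}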
	I1027 22:41:03.971501  769174 ssh_runner.go:195] Run: crio config
	I1027 22:41:04.021190  769174 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 22:41:04.021235  769174 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:41:04.021257  769174 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-293335 NodeName:custom-flannel-293335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:41:04.021381  769174 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-293335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
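The three YAML documents above are rendered from the kubeadm options struct at kubeadm.go:190 and written to /var/tmp/minikube/kubeadm.yaml.new (the 2217-byte scp below). A minimal sketch of producing one such document with gopkg.in/yaml.v3, assuming a hand-rolled struct rather than kubeadm's real API types:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Networking mirrors the `networking:` stanza of the ClusterConfiguration
// document above; the struct is a stand-in, not kubeadm's type.
type Networking struct {
	DNSDomain     string `yaml:"dnsDomain"`
	PodSubnet     string `yaml:"podSubnet"`
	ServiceSubnet string `yaml:"serviceSubnet"`
}

type ClusterConfiguration struct {
	APIVersion        string     `yaml:"apiVersion"`
	Kind              string     `yaml:"kind"`
	KubernetesVersion string     `yaml:"kubernetesVersion"`
	Networking        Networking `yaml:"networking"`
}

func main() {
	out, _ := yaml.Marshal(ClusterConfiguration{
		APIVersion:        "kubeadm.k8s.io/v1beta4",
		Kind:              "ClusterConfiguration",
		KubernetesVersion: "v1.34.1",
		Networking: Networking{
			DNSDomain:     "cluster.local",
			PodSubnet:     "10.244.0.0/16",
			ServiceSubnet: "10.96.0.0/12",
		},
	})
	fmt.Print(string(out))
}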
	
	I1027 22:41:04.021434  769174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:41:04.030545  769174 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:41:04.030616  769174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:41:04.039138  769174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1027 22:41:04.051976  769174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:41:04.066765  769174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1027 22:41:04.079719  769174 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:41:04.083739  769174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:41:04.094127  769174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:41:04.188546  769174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:41:04.213818  769174 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335 for IP: 192.168.94.2
	I1027 22:41:04.213842  769174 certs.go:195] generating shared ca certs ...
	I1027 22:41:04.213865  769174 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.214057  769174 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:41:04.214098  769174 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:41:04.214109  769174 certs.go:257] generating profile certs ...
	I1027 22:41:04.214163  769174 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.key
	I1027 22:41:04.214177  769174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.crt with IP's: []
	I1027 22:41:04.498919  769174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.crt ...
	I1027 22:41:04.498963  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.crt: {Name:mk3ecb20d0390181b7834facbabeb8a5d05066b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.499154  769174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.key ...
	I1027 22:41:04.499174  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.key: {Name:mkaea1120ea61308f96b400c93c5b59e919dea82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.499281  769174 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key.2e14aaf9
	I1027 22:41:04.499298  769174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt.2e14aaf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1027 22:41:04.795603  769174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt.2e14aaf9 ...
	I1027 22:41:04.795629  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt.2e14aaf9: {Name:mkdbc395cc94a13f41a68386e7b3bca65a674938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.795788  769174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key.2e14aaf9 ...
	I1027 22:41:04.795801  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key.2e14aaf9: {Name:mk3149882b9a9d67741a80bfc99cbac6b9807826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.795876  769174 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt.2e14aaf9 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt
	I1027 22:41:04.795982  769174 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key.2e14aaf9 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key
	I1027 22:41:04.796069  769174 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.key
	I1027 22:41:04.796088  769174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.crt with IP's: []
	I1027 22:41:05.474482  769174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.crt ...
	I1027 22:41:05.474518  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.crt: {Name:mkec66ca4413058c5e161f02688fd59c9cc61a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:05.474737  769174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.key ...
	I1027 22:41:05.474754  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.key: {Name:mk663a8ce12e664e1f681fbe25a3d8c183eccd7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
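crypto.go:68 above generates each profile cert by signing a fresh key with the shared minikube CA and embedding the SAN IPs shown in the log ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2] for the apiserver cert). A hedged Go sketch of that step with crypto/x509, substituting a throwaway CA for the files under .minikube/ (errors ignored for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/ca.{crt,key}.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert carrying the SAN IPs from the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}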
	I1027 22:41:05.474994  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:41:05.475048  769174 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:41:05.475062  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:41:05.475092  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:41:05.475122  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:41:05.475159  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:41:05.475213  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:41:05.475989  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:41:05.495978  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:41:05.515763  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:41:05.534682  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:41:05.552526  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 22:41:05.571594  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:41:05.591117  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:41:05.610181  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:41:05.629582  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:41:05.649120  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:41:05.667988  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:41:05.697529  769174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:41:05.714142  769174 ssh_runner.go:195] Run: openssl version
	I1027 22:41:05.721982  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:41:05.731186  769174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:41:05.735929  769174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:41:05.736381  769174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:41:05.783027  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:41:05.793801  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:41:05.805486  769174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:41:05.810194  769174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:41:05.810261  769174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:41:05.859062  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:41:05.869340  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:41:05.879097  769174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:41:05.883567  769174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:41:05.883629  769174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:41:05.931271  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
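	The 51391683.0 and b5213941.0 names above are OpenSSL subject-hash links: TLS libraries look a CA up in /etc/ssl/certs by the hash of its subject, so each PEM is hashed and symlinked under <hash>.0. A minimal sketch of that same sequence, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem (not a path from this run):

	#!/usr/bin/env bash
	set -euo pipefail
	# Example path only; substitute the CA the system trust store should pick up.
	cert=/usr/share/ca-certificates/example.pem
	# Ask OpenSSL for the subject hash that lookup code expects as the link name.
	hash=$(openssl x509 -hash -noout -in "$cert")
	# Create the <hash>.0 link only when one is not already present, as the log above does.
	sudo /bin/bash -c "test -L /etc/ssl/certs/${hash}.0 || ln -fs $cert /etc/ssl/certs/${hash}.0"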
	I1027 22:41:05.942113  769174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:41:05.946919  769174 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:41:05.946993  769174 kubeadm.go:401] StartCluster: {Name:custom-flannel-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:41:05.947104  769174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:41:05.947181  769174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:41:05.983005  769174 cri.go:89] found id: ""
	I1027 22:41:05.983066  769174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:41:05.994200  769174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:41:06.007198  769174 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:41:06.007258  769174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:41:06.019604  769174 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:41:06.019629  769174 kubeadm.go:158] found existing configuration files:
	
	I1027 22:41:06.019679  769174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:41:06.031633  769174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:41:06.031708  769174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:41:06.041065  769174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:41:06.050194  769174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:41:06.050247  769174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:41:06.059729  769174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:41:06.070434  769174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:41:06.070479  769174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:41:06.078452  769174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:41:06.086906  769174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:41:06.086989  769174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:41:06.096058  769174 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:41:06.143632  769174 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:41:06.143696  769174 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:41:06.171260  769174 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:41:06.171356  769174 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:41:06.171449  769174 kubeadm.go:319] OS: Linux
	I1027 22:41:06.171526  769174 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:41:06.171596  769174 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:41:06.171687  769174 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:41:06.171766  769174 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:41:06.171852  769174 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:41:06.171937  769174 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:41:06.172035  769174 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:41:06.172112  769174 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:41:06.234043  769174 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:41:06.234208  769174 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:41:06.234328  769174 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:41:06.242281  769174 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:41:06.244762  769174 out.go:252]   - Generating certificates and keys ...
	I1027 22:41:06.244851  769174 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:41:06.244931  769174 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:41:06.505140  769174 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:41:05.699921  764907 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:41:05.705103  764907 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:41:05.705140  764907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:41:05.720463  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:41:05.979526  764907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:41:05.979623  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:05.979644  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-293335 minikube.k8s.io/updated_at=2025_10_27T22_41_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=kindnet-293335 minikube.k8s.io/primary=true
	I1027 22:41:06.088826  764907 ops.go:34] apiserver oom_adj: -16
	I1027 22:41:06.088847  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:06.589437  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:07.089088  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:07.588886  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:08.089047  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:05.766067  766237 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005522152s
	I1027 22:41:07.900615  766237 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.140173797s
	I1027 22:41:09.261490  766237 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501034492s
	I1027 22:41:09.273793  766237 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:41:09.283344  766237 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:41:09.291497  766237 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:41:09.291799  766237 kubeadm.go:319] [mark-control-plane] Marking the node calico-293335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:41:09.299770  766237 kubeadm.go:319] [bootstrap-token] Using token: ae116e.rffapx0bx6ok1lcc
	I1027 22:41:09.301052  766237 out.go:252]   - Configuring RBAC rules ...
	I1027 22:41:09.301194  766237 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:41:09.304078  766237 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:41:09.308975  766237 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:41:09.311317  766237 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:41:09.313488  766237 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:41:09.317191  766237 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:41:09.668677  766237 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:41:07.063313  769174 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:41:07.357833  769174 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:41:07.467125  769174 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:41:07.730995  769174 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:41:07.731180  769174 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-293335 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:41:07.972039  769174 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:41:07.972261  769174 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-293335 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:41:08.421854  769174 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:41:08.736485  769174 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:41:09.136654  769174 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:41:09.136758  769174 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:41:09.685172  769174 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:41:10.010057  769174 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:41:10.148850  769174 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:41:10.486521  769174 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:41:10.568592  769174 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:41:10.569043  769174 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:41:10.572657  769174 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:41:10.081854  766237 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:41:10.672294  766237 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:41:10.672325  766237 kubeadm.go:319] 
	I1027 22:41:10.672418  766237 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:41:10.672425  766237 kubeadm.go:319] 
	I1027 22:41:10.672551  766237 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:41:10.672578  766237 kubeadm.go:319] 
	I1027 22:41:10.672615  766237 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:41:10.672692  766237 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:41:10.672762  766237 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:41:10.672773  766237 kubeadm.go:319] 
	I1027 22:41:10.672862  766237 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:41:10.672873  766237 kubeadm.go:319] 
	I1027 22:41:10.672924  766237 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:41:10.672962  766237 kubeadm.go:319] 
	I1027 22:41:10.673017  766237 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:41:10.673105  766237 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:41:10.673181  766237 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:41:10.673188  766237 kubeadm.go:319] 
	I1027 22:41:10.673308  766237 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:41:10.673433  766237 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:41:10.673453  766237 kubeadm.go:319] 
	I1027 22:41:10.673566  766237 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ae116e.rffapx0bx6ok1lcc \
	I1027 22:41:10.673715  766237 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 22:41:10.673744  766237 kubeadm.go:319] 	--control-plane 
	I1027 22:41:10.673750  766237 kubeadm.go:319] 
	I1027 22:41:10.673866  766237 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:41:10.673874  766237 kubeadm.go:319] 
	I1027 22:41:10.674002  766237 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ae116e.rffapx0bx6ok1lcc \
	I1027 22:41:10.674155  766237 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
	I1027 22:41:10.679802  766237 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 22:41:10.679970  766237 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 22:41:10.680011  766237 cni.go:84] Creating CNI manager for "calico"
	I1027 22:41:10.681495  766237 out.go:179] * Configuring Calico (Container Networking Interface) ...
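	The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's public key, which lets a joining node pin the control plane without a pre-shared file. A sketch of recomputing it on the control plane, per the kubeadm documentation, assuming the stock CA path /etc/kubernetes/pki/ca.crt (minikube keeps its copy under /var/lib/minikube/certs) and an RSA CA key:

	# Extract the CA public key, DER-encode it, and hash it; the hex digest is the
	# value that follows "sha256:" in the kubeadm join command.
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex \
	  | sed 's/^.* //'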
	I1027 22:41:08.588936  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:09.089816  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:09.589082  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:10.089844  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:10.588937  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:10.694895  764907 kubeadm.go:1114] duration metric: took 4.715364225s to wait for elevateKubeSystemPrivileges
	I1027 22:41:10.694927  764907 kubeadm.go:403] duration metric: took 16.348371447s to StartCluster
	I1027 22:41:10.694962  764907 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:10.695051  764907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:41:10.696039  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:10.696285  764907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 22:41:10.696295  764907 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:41:10.696365  764907 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:41:10.696488  764907 addons.go:69] Setting storage-provisioner=true in profile "kindnet-293335"
	I1027 22:41:10.696513  764907 addons.go:238] Setting addon storage-provisioner=true in "kindnet-293335"
	I1027 22:41:10.696538  764907 config.go:182] Loaded profile config "kindnet-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:41:10.696551  764907 host.go:66] Checking if "kindnet-293335" exists ...
	I1027 22:41:10.696532  764907 addons.go:69] Setting default-storageclass=true in profile "kindnet-293335"
	I1027 22:41:10.696575  764907 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-293335"
	I1027 22:41:10.697010  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:41:10.697041  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:41:10.697981  764907 out.go:179] * Verifying Kubernetes components...
	I1027 22:41:10.699191  764907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:41:10.725465  764907 addons.go:238] Setting addon default-storageclass=true in "kindnet-293335"
	I1027 22:41:10.725518  764907 host.go:66] Checking if "kindnet-293335" exists ...
	I1027 22:41:10.726033  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:41:10.727306  764907 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:41:10.728331  764907 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:41:10.728402  764907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:41:10.728500  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:41:10.763720  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:41:10.765809  764907 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:41:10.765834  764907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:41:10.765936  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:41:10.793766  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:41:10.831675  764907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 22:41:10.883189  764907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:41:10.920886  764907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:41:10.933622  764907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:41:11.140816  764907 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1027 22:41:11.142961  764907 node_ready.go:35] waiting up to 15m0s for node "kindnet-293335" to be "Ready" ...
	I1027 22:41:11.381413  764907 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
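	The host record injection logged a few lines earlier works by splicing a hosts{} stanza into the Corefile held in the coredns ConfigMap, ahead of the forward-to-resolv.conf rule, so host.minikube.internal is answered locally before the query falls through. A readable sketch of that pipeline, assuming kubectl access to this cluster and the 192.168.85.1 gateway address from this run:

	# Pull the coredns ConfigMap, insert a hosts block before the forward rule
	# (GNU sed: "i\" inserts the escaped text above the matching line, with \n
	# expanding to newlines), then replace the ConfigMap; the namespace is
	# carried by the manifest itself.
	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl replace -f -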
	I1027 22:41:10.573874  769174 out.go:252]   - Booting up control plane ...
	I1027 22:41:10.573992  769174 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:41:10.574094  769174 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:41:10.574676  769174 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:41:10.589360  769174 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:41:10.589487  769174 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:41:10.597649  769174 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:41:10.598001  769174 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:41:10.598049  769174 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:41:10.766579  769174 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:41:10.767597  769174 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> CRI-O <==
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.721626226Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.721651984Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.721669876Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.725209321Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.725233886Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.725251401Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.728922125Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.728974726Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.729000761Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.732559482Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.732581249Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.732604624Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.73613406Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.736158378Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.913569453Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cc2ab518-5182-4f9c-9b36-ba312ddaaa56 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.914462268Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8f563b0e-8982-4fc0-b7a1-099410b98941 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.91555641Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p/dashboard-metrics-scraper" id=f562db44-6138-4016-ba75-0ccdcd8d938d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.915699376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.922007936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.922757571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.950441439Z" level=info msg="Created container 78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p/dashboard-metrics-scraper" id=f562db44-6138-4016-ba75-0ccdcd8d938d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.95105312Z" level=info msg="Starting container: 78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d" id=b6ce24a8-5f3e-47d9-ac81-0177483be4b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.952855331Z" level=info msg="Started container" PID=1768 containerID=78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p/dashboard-metrics-scraper id=b6ce24a8-5f3e-47d9-ac81-0177483be4b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6130e7efe60c9d745e9003841f305b90f3fb99dd8dd93aef34c48359307f896c
	Oct 27 22:40:44 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:44.032559592Z" level=info msg="Removing container: 6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0" id=e1f5d9bf-d21b-466f-956d-e16dd41f06dd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:40:44 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:44.046732627Z" level=info msg="Removed container 6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p/dashboard-metrics-scraper" id=e1f5d9bf-d21b-466f-956d-e16dd41f06dd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	78283aebd6f86       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   6130e7efe60c9       dashboard-metrics-scraper-6ffb444bf9-6x67p             kubernetes-dashboard
	943e0d285e380       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   c8910a047e8c5       kubernetes-dashboard-855c9754f9-s2lwd                  kubernetes-dashboard
	827e84f1fab22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Running             storage-provisioner         1                   df59a2d0ff396       storage-provisioner                                    kube-system
	b612efeb97942       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   b3366362daa15       busybox                                                default
	dd925db2f94fb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   a9d2f8707ce3e       coredns-66bc5c9577-bvr8f                               kube-system
	941141ecdf554       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   d14f524395552       kindnet-94cw9                                          kube-system
	dddf4daea9020       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   df59a2d0ff396       storage-provisioner                                    kube-system
	ababe86c36b42       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   0c8569ca3e78c       kube-proxy-42dj4                                       kube-system
	9cda36d13a021       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   947444da57827       etcd-default-k8s-diff-port-927034                      kube-system
	a73ac42016306       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   51c42ac5032a0       kube-scheduler-default-k8s-diff-port-927034            kube-system
	341e84318f679       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   d66f84b438e81       kube-apiserver-default-k8s-diff-port-927034            kube-system
	844da32e0557f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   00ab06f81f16f       kube-controller-manager-default-k8s-diff-port-927034   kube-system
	
	
	==> coredns [dd925db2f94fb591e9c7cb190ecb837b75758b86b30152040595a82ecd10fac3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47352 - 41434 "HINFO IN 5411424138599910356.9208066112809769200. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02854589s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-927034
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-927034
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=default-k8s-diff-port-927034
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_39_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:39:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-927034
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:41:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:40:57 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:40:57 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:40:57 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:40:57 +0000   Mon, 27 Oct 2025 22:40:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-927034
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                bea60602-4e46-4583-a378-a857a2ae88ea
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-bvr8f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-927034                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-94cw9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-927034             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-927034    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-42dj4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-927034             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6x67p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s2lwd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           113s                 node-controller  Node default-k8s-diff-port-927034 event: Registered Node default-k8s-diff-port-927034 in Controller
	  Normal  NodeReady                100s                 kubelet          Node default-k8s-diff-port-927034 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)    kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)    kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 60s)    kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node default-k8s-diff-port-927034 event: Registered Node default-k8s-diff-port-927034 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [9cda36d13a02141502e61a8f0bd69b14fb79ac20826af4e9365b17402d4e4467] <==
	{"level":"warn","ts":"2025-10-27T22:40:15.442779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.452677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.460455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.470090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.477574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.484614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.491853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.498712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.506034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.516452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.524734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.532345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.539334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.557292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.560990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.568004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.575710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.621578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60688","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:40:47.442176Z","caller":"traceutil/trace.go:172","msg":"trace[1546843057] transaction","detail":"{read_only:false; response_revision:662; number_of_response:1; }","duration":"130.393ms","start":"2025-10-27T22:40:47.311755Z","end":"2025-10-27T22:40:47.442148Z","steps":["trace[1546843057] 'process raft request'  (duration: 63.321958ms)","trace[1546843057] 'compare'  (duration: 66.951846ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:40:56.552441Z","caller":"traceutil/trace.go:172","msg":"trace[685820210] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"157.012786ms","start":"2025-10-27T22:40:56.395408Z","end":"2025-10-27T22:40:56.552421Z","steps":["trace[685820210] 'process raft request'  (duration: 128.686703ms)","trace[685820210] 'compare'  (duration: 28.218527ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:40:56.555508Z","caller":"traceutil/trace.go:172","msg":"trace[1557955267] transaction","detail":"{read_only:false; response_revision:673; number_of_response:1; }","duration":"158.850601ms","start":"2025-10-27T22:40:56.396643Z","end":"2025-10-27T22:40:56.555493Z","steps":["trace[1557955267] 'process raft request'  (duration: 158.791802ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:40:56.555511Z","caller":"traceutil/trace.go:172","msg":"trace[1132699416] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"159.977305ms","start":"2025-10-27T22:40:56.395522Z","end":"2025-10-27T22:40:56.555499Z","steps":["trace[1132699416] 'process raft request'  (duration: 159.815947ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:40:56.736980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.496903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T22:40:56.737158Z","caller":"traceutil/trace.go:172","msg":"trace[2005717014] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:673; }","duration":"122.723902ms","start":"2025-10-27T22:40:56.614413Z","end":"2025-10-27T22:40:56.737137Z","steps":["trace[2005717014] 'agreement among raft nodes before linearized reading'  (duration: 80.647457ms)","trace[2005717014] 'range keys from in-memory index tree'  (duration: 41.821969ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:40:56.737482Z","caller":"traceutil/trace.go:172","msg":"trace[1402834480] transaction","detail":"{read_only:false; response_revision:674; number_of_response:1; }","duration":"175.407473ms","start":"2025-10-27T22:40:56.562048Z","end":"2025-10-27T22:40:56.737456Z","steps":["trace[1402834480] 'process raft request'  (duration: 133.067879ms)","trace[1402834480] 'compare'  (duration: 41.823297ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:41:13 up  2:23,  0 user,  load average: 4.89, 3.46, 3.02
	Linux default-k8s-diff-port-927034 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [941141ecdf5542a303eff7ec706390c2f855de75447f8261b3667f38a2495d01] <==
	I1027 22:40:17.517739       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:40:17.518009       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1027 22:40:17.518146       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:40:17.518164       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:40:17.518174       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:40:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:40:17.717288       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:40:17.816402       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:40:17.816562       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:40:17.816810       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:40:18.216995       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:40:18.217026       1 metrics.go:72] Registering metrics
	I1027 22:40:18.217095       1 controller.go:711] "Syncing nftables rules"
	I1027 22:40:27.717078       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:40:27.717137       1 main.go:301] handling current node
	I1027 22:40:37.723207       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:40:37.723236       1 main.go:301] handling current node
	I1027 22:40:47.717057       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:40:47.717100       1 main.go:301] handling current node
	I1027 22:40:57.718221       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:40:57.718259       1 main.go:301] handling current node
	I1027 22:41:07.726030       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:41:07.726068       1 main.go:301] handling current node
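
The kindnet entries repeat the same pair of lines roughly every ten seconds: the daemon periodically re-lists the cluster's nodes and re-handles each node's IPs, programming routes and nftables rules. A minimal sketch of that ticker-driven reconcile shape; the reconcile body here is a stand-in, not kindnet's actual logic:

package main

import (
	"context"
	"log"
	"time"
)

// reconcile stands in for kindnet's per-node handling; the real daemon
// walks the node list and programs routes/nftables for each node's IPs.
func reconcile() {
	log.Println("handling current node")
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// The ~10s cadence matches the timestamps in the kindnet log above.
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			reconcile()
		}
	}
}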
	
	
	==> kube-apiserver [341e84318f679f97a704241f45d9cfde3d9e2e8695ec44c4ff77dcb1b0fb2385] <==
	I1027 22:40:16.128715       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:40:16.128723       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:40:16.128729       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:40:16.137453       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:40:16.137500       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:40:16.137542       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 22:40:16.137658       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 22:40:16.137694       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:40:16.145175       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:40:16.178724       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:40:16.190903       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 22:40:16.190938       1 policy_source.go:240] refreshing policies
	I1027 22:40:16.196629       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:40:16.226472       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 22:40:16.423007       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:40:16.455836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:40:16.476090       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:40:16.482444       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:40:16.489744       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:40:16.523823       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.45.67"}
	I1027 22:40:16.534123       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.37.143"}
	I1027 22:40:17.030253       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:40:19.267876       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:40:19.366617       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:40:19.417242       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [844da32e0557faa56becf52073bd2e1d4107c6dcd6a6994bf7b807ec687a20df] <==
	I1027 22:40:18.824056       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:40:18.826660       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 22:40:18.827903       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:40:18.829926       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:40:18.848414       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:40:18.850643       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:40:18.852880       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:40:18.855133       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 22:40:18.863795       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:40:18.863888       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:40:18.863918       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:40:18.863980       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 22:40:18.864085       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:40:18.864152       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:40:18.864153       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 22:40:18.864269       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:40:18.864936       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:40:18.867119       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:40:18.869357       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:40:18.869363       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:40:18.871622       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 22:40:18.873822       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:40:18.876067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:40:18.878316       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 22:40:18.888638       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
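
The run of "Caches are synced" messages is the standard client-go shared-informer handshake: each controller starts its informers, blocks until the local caches have caught up with the apiserver, and only then begins reconciling. A minimal sketch of that pattern with client-go; the Pods informer and 30s resync period are illustrative, not what kube-controller-manager actually uses:

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	factory.Start(ctx.Done()) // "Waiting for caches to sync"
	if !cache.WaitForCacheSync(ctx.Done(), podInformer.HasSynced) {
		log.Fatal("timed out waiting for caches to sync")
	}
	log.Println("caches are synced; the controller loop may start")
}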
	
	
	==> kube-proxy [ababe86c36b425bd0273434f7b483138971716fbdf50f44c100e55918006dcfb] <==
	I1027 22:40:17.317309       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:40:17.379061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:40:17.479279       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:40:17.479335       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1027 22:40:17.479466       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:40:17.500486       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:40:17.500544       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:40:17.506556       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:40:17.506992       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:40:17.507040       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:17.508322       1 config.go:200] "Starting service config controller"
	I1027 22:40:17.508405       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:40:17.508415       1 config.go:309] "Starting node config controller"
	I1027 22:40:17.508449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:40:17.508458       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:40:17.508476       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:40:17.508487       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:40:17.508497       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:40:17.508506       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:40:17.609568       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:40:17.609608       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:40:17.609562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a73ac42016306256e53333754b058b687911ab56a58a53efba33e2650ed7f3c4] <==
	I1027 22:40:15.276232       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:40:16.055355       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:40:16.055402       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:40:16.055419       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:40:16.055428       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:40:16.132799       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:40:16.133457       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:16.136588       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:40:16.136676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:40:16.139040       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:40:16.139103       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:40:16.237205       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:40:19 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:19.482410     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a81bcd0c-04cb-409e-aad0-b5a2fa67a094-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-s2lwd\" (UID: \"a81bcd0c-04cb-409e-aad0-b5a2fa67a094\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s2lwd"
	Oct 27 22:40:19 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:19.482435     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x49nf\" (UniqueName: \"kubernetes.io/projected/a81bcd0c-04cb-409e-aad0-b5a2fa67a094-kube-api-access-x49nf\") pod \"kubernetes-dashboard-855c9754f9-s2lwd\" (UID: \"a81bcd0c-04cb-409e-aad0-b5a2fa67a094\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s2lwd"
	Oct 27 22:40:19 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:19.482453     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/37064add-e8da-40e3-9610-90576ff56b3b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-6x67p\" (UID: \"37064add-e8da-40e3-9610-90576ff56b3b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p"
	Oct 27 22:40:22 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:22.961797     724 scope.go:117] "RemoveContainer" containerID="158e3a0428f441cf6d1f1cf2bd69b5b147d55f5f9a74339253a024356b7d9556"
	Oct 27 22:40:23 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:23.966319     724 scope.go:117] "RemoveContainer" containerID="158e3a0428f441cf6d1f1cf2bd69b5b147d55f5f9a74339253a024356b7d9556"
	Oct 27 22:40:23 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:23.966491     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:23 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:23.966690     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:40:24 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:24.970798     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:24 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:24.971045     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:40:26 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:26.345531     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 22:40:26 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:26.987459     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s2lwd" podStartSLOduration=1.550663664 podStartE2EDuration="7.987436843s" podCreationTimestamp="2025-10-27 22:40:19 +0000 UTC" firstStartedPulling="2025-10-27 22:40:19.659453813 +0000 UTC m=+5.832836512" lastFinishedPulling="2025-10-27 22:40:26.096227003 +0000 UTC m=+12.269609691" observedRunningTime="2025-10-27 22:40:26.987434735 +0000 UTC m=+13.160817441" watchObservedRunningTime="2025-10-27 22:40:26.987436843 +0000 UTC m=+13.160819549"
	Oct 27 22:40:30 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:30.222432     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:30 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:30.223081     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:40:43 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:43.913131     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:44 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:44.031077     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:44 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:44.031365     724 scope.go:117] "RemoveContainer" containerID="78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	Oct 27 22:40:44 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:44.031598     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:40:50 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:50.222399     724 scope.go:117] "RemoveContainer" containerID="78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	Oct 27 22:40:50 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:50.222652     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:41:00 default-k8s-diff-port-927034 kubelet[724]: I1027 22:41:00.912752     724 scope.go:117] "RemoveContainer" containerID="78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	Oct 27 22:41:00 default-k8s-diff-port-927034 kubelet[724]: E1027 22:41:00.913860     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:41:10 default-k8s-diff-port-927034 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:41:10 default-k8s-diff-port-927034 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:41:10 default-k8s-diff-port-927034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:41:10 default-k8s-diff-port-927034 systemd[1]: kubelet.service: Consumed 1.812s CPU time.
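
The dashboard-metrics-scraper container is in CrashLoopBackOff, and the kubelet messages show the restart delay growing from 10s to 20s. That is the kubelet's default container-restart backoff: the delay doubles after each failed restart from a 10s base up to a 5m cap, and resets once the container runs cleanly for a while. A small sketch of the arithmetic:

package main

import (
	"fmt"
	"time"
)

// crashLoopBackOff mirrors the kubelet's default restart delay:
// doubling from a 10s base, capped at 5m.
func crashLoopBackOff(restarts int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	// Matches the progression in the kubelet log above: 10s, then 20s, ...
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart %d -> back-off %v\n", r, crashLoopBackOff(r))
	}
}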
	
	
	==> kubernetes-dashboard [943e0d285e380306579142f00ea866adbc1a6d3e36fe8de0c8f3a0cfa6d58fda] <==
	2025/10/27 22:40:26 Using namespace: kubernetes-dashboard
	2025/10/27 22:40:26 Using in-cluster config to connect to apiserver
	2025/10/27 22:40:26 Using secret token for csrf signing
	2025/10/27 22:40:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:40:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:40:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 22:40:26 Generating JWE encryption key
	2025/10/27 22:40:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:40:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:40:26 Initializing JWE encryption key from synchronized object
	2025/10/27 22:40:26 Creating in-cluster Sidecar client
	2025/10/27 22:40:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:40:26 Serving insecurely on HTTP port: 9090
	2025/10/27 22:40:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:40:26 Starting overwatch
	
	
	==> storage-provisioner [827e84f1fab22b15e97cd49ea5930dc974a7849de6da28521576edd02930da17] <==
	W1027 22:40:49.555643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:51.559132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:51.572225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:53.575707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:53.579687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:55.583638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:55.643160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:57.646226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:57.651416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:59.655080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:59.659878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:01.663789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:01.668692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:03.673129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:03.678579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:05.682279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:05.688966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:07.692572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:07.697571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:09.700830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:09.705811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:11.716794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:11.732196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:13.737781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:13.745354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
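
The storage-provisioner is polling v1 Endpoints every two seconds, and each call now draws the deprecation warning above: from v1.33 the supported read path is discovery.k8s.io/v1 EndpointSlice, selected by the well-known kubernetes.io/service-name label rather than by object name. A minimal client-go sketch of the replacement lookup; the kube-system/kube-dns service is just an example target:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// EndpointSlices for a Service are found via a label selector,
	// not by the Service's object name as with v1 Endpoints.
	slices, err := clientset.DiscoveryV1().EndpointSlices("kube-system").List(
		context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"},
	)
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}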
	
	
	==> storage-provisioner [dddf4daea9020cf289743053ebca403400a4f7513ff226a3edfb5fc2caf01a72] <==
	I1027 22:40:17.289626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:40:17.291920       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034: exit status 2 (413.650869ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
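
The --format={{.APIServer}} and --format={{.Host}} flags used throughout the post-mortem are Go text/template expressions evaluated against minikube's status struct, which is how the command can print "Running" for one component while still exiting 2 for the cluster's overall state. A small sketch of how such a flag renders; the Status struct here is a two-field stand-in, not minikube's real type:

package main

import (
	"os"
	"text/template"
)

// Status models only the two fields the report's --format flags reference.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Running"}
	// Equivalent of: minikube status --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}
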
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-927034 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-927034
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-927034:

-- stdout --
	[
	    {
	        "Id": "d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a",
	        "Created": "2025-10-27T22:39:00.365066876Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 753941,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:40:07.686275778Z",
	            "FinishedAt": "2025-10-27T22:40:06.677682314Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/hosts",
	        "LogPath": "/var/lib/docker/containers/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a/d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a-json.log",
	        "Name": "/default-k8s-diff-port-927034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-927034:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-927034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0fdd499dd47ff546d6602e63f8ea034b9aee510f75c724f21fa092324dd241a",
	                "LowerDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb-init/diff:/var/lib/docker/overlay2/aa40bcae7c1d6af30e06ce1096f753f0fae2ea9c2d1b005e5be5221105c74101/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3bc83a3b634fab18fb085ab32d1d7e8afc6e677fdfcd3460fb5d113ff1c475bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-927034",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-927034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-927034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-927034",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-927034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9f068dfb11d0f58e080b8853e862fb40d0205711c5deaa2d6ca1996c706d09d",
	            "SandboxKey": "/var/run/docker/netns/a9f068dfb11d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-927034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:3a:ec:7b:df:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "25e72b99ac2bb46615ab3180c2d17b65b027e144e1892b4833bd16fb1b4eb32a",
	                    "EndpointID": "d9d6b22e147667ac4a9b899d3f00c3babf3075afe9047b3ed59d797c37fced52",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-927034",
	                        "d0fdd499dd47"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
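
The inspect dump shows the kicbase container publishing everything on loopback: container port 8444/tcp (the non-default apiserver port this profile exercises) is reachable at 127.0.0.1:33101. Recovering such a mapping programmatically only needs the NetworkSettings.Ports map; a minimal Go sketch decoding the same JSON shape that docker inspect prints, with the container name taken from the report:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// portBinding mirrors the entries under NetworkSettings.Ports in the dump.
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspect struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-927034").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect // docker inspect always prints a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		for _, b := range c.NetworkSettings.Ports["8444/tcp"] {
			fmt.Printf("%s: 8444/tcp -> %s:%s\n", c.Name, b.HostIP, b.HostPort)
		}
	}
}

The same value is available straight from the CLI with a Go template, along the lines of: docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-927034
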
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034: exit status 2 (370.282754ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927034 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-927034 logs -n 25: (1.496183716s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-293335 sudo systemctl cat docker --no-pager                                                                                                                │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo docker system info                                                                                                                             │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ delete  │ -p embed-certs-829976                                                                                                                                              │ embed-certs-829976           │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ start   │ -p kindnet-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                           │ kindnet-293335               │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cri-dockerd --version                                                                                                                          │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ delete  │ -p newest-cni-290425                                                                                                                                               │ newest-cni-290425            │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p calico-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-293335                │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ ssh     │ -p auto-293335 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo containerd config dump                                                                                                                         │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ ssh     │ -p auto-293335 sudo crio config                                                                                                                                    │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ delete  │ -p auto-293335                                                                                                                                                     │ auto-293335                  │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │ 27 Oct 25 22:40 UTC │
	│ start   │ -p custom-flannel-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-293335        │ jenkins │ v1.37.0 │ 27 Oct 25 22:40 UTC │                     │
	│ image   │ default-k8s-diff-port-927034 image list --format=json                                                                                                              │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:41 UTC │ 27 Oct 25 22:41 UTC │
	│ pause   │ -p default-k8s-diff-port-927034 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-927034 │ jenkins │ v1.37.0 │ 27 Oct 25 22:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:40:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:40:51.704540  769174 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:40:51.704910  769174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:51.704924  769174 out.go:374] Setting ErrFile to fd 2...
	I1027 22:40:51.704932  769174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:40:51.705278  769174 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:40:51.706302  769174 out.go:368] Setting JSON to false
	I1027 22:40:51.708157  769174 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8591,"bootTime":1761596261,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:40:51.708260  769174 start.go:143] virtualization: kvm guest
	I1027 22:40:51.710046  769174 out.go:179] * [custom-flannel-293335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:40:51.711512  769174 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:40:51.711556  769174 notify.go:221] Checking for updates...
	I1027 22:40:51.713429  769174 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:40:51.714559  769174 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:40:51.716536  769174 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:40:51.717688  769174 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:40:51.718762  769174 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:40:51.720331  769174 config.go:182] Loaded profile config "calico-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:51.720469  769174 config.go:182] Loaded profile config "default-k8s-diff-port-927034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:51.720610  769174 config.go:182] Loaded profile config "kindnet-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:51.720715  769174 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:40:51.748412  769174 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:40:51.748510  769174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:51.813919  769174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:80 SystemTime:2025-10-27 22:40:51.803177553 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:51.814113  769174 docker.go:318] overlay module found
	I1027 22:40:51.815601  769174 out.go:179] * Using the docker driver based on user configuration
	I1027 22:40:51.816553  769174 start.go:307] selected driver: docker
	I1027 22:40:51.816577  769174 start.go:928] validating driver "docker" against <nil>
	I1027 22:40:51.816599  769174 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:40:51.817288  769174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:40:51.894340  769174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-27 22:40:51.882710033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:40:51.894603  769174 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:40:51.894892  769174 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:40:51.898473  769174 out.go:179] * Using Docker driver with root privileges
	I1027 22:40:51.899513  769174 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 22:40:51.899555  769174 start_flags.go:335] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1027 22:40:51.899664  769174 start.go:351] cluster config:
	{Name:custom-flannel-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:51.900932  769174 out.go:179] * Starting "custom-flannel-293335" primary control-plane node in "custom-flannel-293335" cluster
	I1027 22:40:51.902454  769174 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:40:51.903618  769174 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:40:51.904878  769174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:51.904925  769174 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 22:40:51.904930  769174 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:40:51.904962  769174 cache.go:59] Caching tarball of preloaded images
	I1027 22:40:51.905086  769174 preload.go:233] Found /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 22:40:51.905105  769174 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:40:51.905238  769174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/config.json ...
	I1027 22:40:51.905265  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/config.json: {Name:mk3ce478049d79270c8b348738fd744d03d55050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:51.930034  769174 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:40:51.930062  769174 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:40:51.930084  769174 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:40:51.930116  769174 start.go:360] acquireMachinesLock for custom-flannel-293335: {Name:mk8bc4d416d94d524af58772a15b2831e6e4bb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:40:51.930224  769174 start.go:364] duration metric: took 85.39µs to acquireMachinesLock for "custom-flannel-293335"
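The acquireMachinesLock entries above show machine creation being serialized behind a named lock configured with Delay:500ms and Timeout:10m0s. A minimal Go sketch of that acquire-with-retry-until-timeout pattern, using a hypothetical lock-file path (minikube's actual lock package is not shown in this log):

    // lockdemo.go: retry-until-timeout acquisition of an exclusive lock file.
    // Path and intervals are illustrative, not minikube's real implementation.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // caller releases when done
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay) // matches the 500ms retry delay in the log
        }
    }

    func main() {
        start := time.Now()
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
    }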
	I1027 22:40:51.930257  769174 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:40:51.930356  769174 start.go:125] createHost starting for "" (driver="docker")
	W1027 22:40:49.688058  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	W1027 22:40:51.689800  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:48.240140  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Running}}
	I1027 22:40:48.267417  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:40:48.292741  764907 cli_runner.go:164] Run: docker exec kindnet-293335 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:40:48.343720  764907 oci.go:144] the created container "kindnet-293335" has a running status.
	I1027 22:40:48.343767  764907 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa...
	I1027 22:40:49.180234  764907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:40:49.290478  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:40:49.308149  764907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:40:49.308178  764907 kic_runner.go:114] Args: [docker exec --privileged kindnet-293335 chown docker:docker /home/docker/.ssh/authorized_keys]
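kic.go:225 generates an RSA key pair for the node and kic_runner then installs the public half as /home/docker/.ssh/authorized_keys inside the container. A hedged sketch of the key-generation half, assuming the golang.org/x/crypto/ssh package (the exact minikube helper is not shown in this log):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate the private key (minikube writes this to .../machines/<name>/id_rsa).
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }

        // Derive the authorized_keys line from the public half.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(ssh.MarshalAuthorizedKey(pub))) // "ssh-rsa AAAA..." line
    }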
	I1027 22:40:49.357719  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:40:49.376767  764907 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:49.376854  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:49.395082  764907 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:49.395364  764907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1027 22:40:49.395382  764907 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:49.538199  764907 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-293335
	
	I1027 22:40:49.538241  764907 ubuntu.go:182] provisioning hostname "kindnet-293335"
	I1027 22:40:49.538315  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:49.559538  764907 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:49.559759  764907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1027 22:40:49.559773  764907 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-293335 && echo "kindnet-293335" | sudo tee /etc/hostname
	I1027 22:40:49.777617  764907 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-293335
	
	I1027 22:40:49.777738  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:49.799981  764907 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:49.800245  764907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1027 22:40:49.800272  764907 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-293335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-293335/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-293335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:40:49.943822  764907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
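The hostname and /etc/hosts fixes above are plain shell run through libmachine's "native" SSH client against the container's forwarded SSH port (127.0.0.1:33108 in this run). A minimal sketch of running one such command, assuming golang.org/x/crypto/ssh and key-based auth with the id_rsa generated earlier:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("id_rsa") // the kic machine key from above
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only on a local test rig
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33108", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("SSH cmd err, output: %v: %s", err, out)
    }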
	I1027 22:40:49.943852  764907 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:40:49.943915  764907 ubuntu.go:190] setting up certificates
	I1027 22:40:49.943928  764907 provision.go:84] configureAuth start
	I1027 22:40:49.943993  764907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-293335
	I1027 22:40:49.960685  764907 provision.go:143] copyHostCerts
	I1027 22:40:49.960752  764907 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:40:49.960768  764907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:40:49.975047  764907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:40:49.975199  764907 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:40:49.975214  764907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:40:49.975269  764907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:40:49.975374  764907 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:40:49.975387  764907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:40:49.975425  764907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:40:49.975502  764907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.kindnet-293335 san=[127.0.0.1 192.168.85.2 kindnet-293335 localhost minikube]
	I1027 22:40:50.076492  764907 provision.go:177] copyRemoteCerts
	I1027 22:40:50.076547  764907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:40:50.076582  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:50.098993  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:50.205142  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:40:50.229537  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1027 22:40:50.269600  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:40:50.288919  764907 provision.go:87] duration metric: took 344.974229ms to configureAuth
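provision.go:117 mints a server certificate whose SANs cover the loopback address, the container's bridge IP, its hostname, localhost, and minikube. A self-contained Go sketch of issuing such a SAN-bearing certificate from a CA key pair using only the standard library (the self-signed CA here stands in for the .minikube/certs ca.pem/ca-key.pem pair; error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA for .minikube/certs/ca.pem + ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN list from the provision.go:117 line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-293335"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"kindnet-293335", "localhost", "minikube"},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
    }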
	I1027 22:40:50.288976  764907 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:40:50.289173  764907 config.go:182] Loaded profile config "kindnet-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:50.289297  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:50.308506  764907 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:50.308791  764907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1027 22:40:50.308816  764907 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:40:50.805862  764907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:40:50.805909  764907 machine.go:97] duration metric: took 1.429120825s to provisionDockerMachine
	I1027 22:40:50.805922  764907 client.go:176] duration metric: took 7.420685863s to LocalClient.Create
	I1027 22:40:50.805965  764907 start.go:167] duration metric: took 7.420742159s to libmachine.API.Create "kindnet-293335"
	I1027 22:40:50.805978  764907 start.go:293] postStartSetup for "kindnet-293335" (driver="docker")
	I1027 22:40:50.805992  764907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:40:50.806051  764907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:40:50.806096  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:50.824020  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:50.930823  764907 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:40:50.935313  764907 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:40:50.935351  764907 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:40:50.935366  764907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:40:50.935430  764907 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:40:50.935552  764907 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:40:50.935685  764907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:40:50.944597  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:50.968709  764907 start.go:296] duration metric: took 162.705731ms for postStartSetup
	I1027 22:40:50.969161  764907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-293335
	I1027 22:40:50.987389  764907 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/config.json ...
	I1027 22:40:50.987645  764907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:40:50.987696  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:51.007650  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:51.107441  764907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:40:51.112445  764907 start.go:128] duration metric: took 7.729036968s to createHost
	I1027 22:40:51.112480  764907 start.go:83] releasing machines lock for "kindnet-293335", held for 7.729154496s
	I1027 22:40:51.112557  764907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-293335
	I1027 22:40:51.130544  764907 ssh_runner.go:195] Run: cat /version.json
	I1027 22:40:51.130633  764907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:40:51.130650  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:51.130716  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:40:51.151229  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:51.151270  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:40:51.255443  764907 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:51.314740  764907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:40:51.353700  764907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:40:51.359333  764907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:40:51.359416  764907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:40:51.448089  764907 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
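cni.go:262 sidelines conflicting bridge and podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. The same idea in a short Go sketch, using the glob patterns from the find invocation above:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        var disabled []string
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pattern)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already sidelined on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err == nil {
                    disabled = append(disabled, m)
                }
            }
        }
        fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
    }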
	I1027 22:40:51.448114  764907 start.go:496] detecting cgroup driver to use...
	I1027 22:40:51.448148  764907 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:40:51.448193  764907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:40:51.467289  764907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:40:51.486592  764907 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:40:51.486658  764907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:40:51.509280  764907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:40:51.529771  764907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:40:51.630099  764907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:40:51.746804  764907 docker.go:234] disabling docker service ...
	I1027 22:40:51.746872  764907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:40:51.769835  764907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:40:51.787656  764907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:40:51.900922  764907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:40:52.031237  764907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:40:52.053455  764907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:40:52.068525  764907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:40:52.069028  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.081877  764907 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:40:52.081939  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.091841  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.102178  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.114004  764907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:40:52.125359  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.138152  764907 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.159281  764907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:52.170137  764907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:40:52.179202  764907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:40:52.200523  764907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:52.318765  764907 ssh_runner.go:195] Run: sudo systemctl restart crio
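The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with a series of sed edits (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) before reloading systemd and restarting crio. A hedged Go sketch of one such idempotent key rewrite, reading and rewriting the file directly instead of shelling out to sed (unlike sed, this version reports a missing key instead of silently doing nothing):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey replaces a `key = ...` line, mirroring the
    // `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'` edit in the log.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        if !re.Match(data) {
            return fmt.Errorf("%s not present in %s", key, path)
        }
        out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }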
	I1027 22:40:52.456883  764907 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:40:52.456981  764907 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:40:52.461403  764907 start.go:564] Will wait 60s for crictl version
	I1027 22:40:52.461473  764907 ssh_runner.go:195] Run: which crictl
	I1027 22:40:52.466594  764907 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:40:52.499358  764907 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
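After the restart, start.go waits up to 60s for the CRI socket at /var/run/crio/crio.sock and then up to 60s more for crictl to answer. A minimal Go sketch of that poll-with-deadline check (interval chosen for illustration):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the CRI socket, as in "Will wait 60s for socket path".
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("%s did not appear within %s", path, timeout)
            }
            time.Sleep(250 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is up")
    }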
	I1027 22:40:52.499451  764907 ssh_runner.go:195] Run: crio --version
	I1027 22:40:52.529884  764907 ssh_runner.go:195] Run: crio --version
	I1027 22:40:52.570385  764907 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:40:52.571747  764907 cli_runner.go:164] Run: docker network inspect kindnet-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:52.591547  764907 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 22:40:52.596552  764907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:52.608285  764907 kubeadm.go:884] updating cluster {Name:kindnet-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:40:52.608439  764907 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:52.608505  764907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:52.647446  764907 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:52.647472  764907 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:40:52.647529  764907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:52.682525  764907 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:52.682551  764907 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:40:52.682560  764907 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 22:40:52.682730  764907 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-293335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1027 22:40:52.682821  764907 ssh_runner.go:195] Run: crio config
	I1027 22:40:52.750528  764907 cni.go:84] Creating CNI manager for "kindnet"
	I1027 22:40:52.750574  764907 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:40:52.750604  764907 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-293335 NodeName:kindnet-293335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:40:52.750787  764907 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-293335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:40:52.750862  764907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:40:52.760518  764907 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:40:52.760585  764907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:40:52.770536  764907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1027 22:40:52.785111  764907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:40:52.801466  764907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
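The kubeadm config shown above is rendered from the kubeadm.go:190 options struct and shipped to /var/tmp/minikube/kubeadm.yaml.new over SSH. A minimal text/template sketch of that render step, with a hypothetical opts struct whose fields mirror values from the options dump (this is not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    type opts struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
    }

    func main() {
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        // Values taken from the kubeadm.go:190 options dump above.
        _ = t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.85.2",
            APIServerPort:    8443,
            CRISocket:        "/var/run/crio/crio.sock",
            NodeName:         "kindnet-293335",
        })
    }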
	I1027 22:40:52.815738  764907 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:40:52.819968  764907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:52.830160  764907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:52.915668  764907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:40:52.949032  764907 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335 for IP: 192.168.85.2
	I1027 22:40:52.949057  764907 certs.go:195] generating shared ca certs ...
	I1027 22:40:52.949079  764907 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:52.949252  764907 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:40:52.949303  764907 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:40:52.949316  764907 certs.go:257] generating profile certs ...
	I1027 22:40:52.949391  764907 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.key
	I1027 22:40:52.949408  764907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.crt with IP's: []
	I1027 22:40:51.446634  766237 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.367722896s)
	I1027 22:40:51.446674  766237 kic.go:203] duration metric: took 3.367867287s to extract preloaded images to volume ...
	W1027 22:40:51.446771  766237 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:40:51.446821  766237 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:40:51.446876  766237 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:40:51.510996  766237 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-293335 --name calico-293335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-293335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-293335 --network calico-293335 --ip 192.168.76.2 --volume calico-293335:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:40:51.865318  766237 cli_runner.go:164] Run: docker container inspect calico-293335 --format={{.State.Running}}
	I1027 22:40:51.888864  766237 cli_runner.go:164] Run: docker container inspect calico-293335 --format={{.State.Status}}
	I1027 22:40:51.909765  766237 cli_runner.go:164] Run: docker exec calico-293335 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:40:51.969116  766237 oci.go:144] the created container "calico-293335" has a running status.
	I1027 22:40:51.969162  766237 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa...
	I1027 22:40:52.208036  766237 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:40:52.258322  766237 cli_runner.go:164] Run: docker container inspect calico-293335 --format={{.State.Status}}
	I1027 22:40:52.280060  766237 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:40:52.280087  766237 kic_runner.go:114] Args: [docker exec --privileged calico-293335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:40:52.340055  766237 cli_runner.go:164] Run: docker container inspect calico-293335 --format={{.State.Status}}
	I1027 22:40:52.359580  766237 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:52.359688  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:52.380278  766237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:52.380639  766237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1027 22:40:52.380668  766237 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:52.537095  766237 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-293335
	
	I1027 22:40:52.537131  766237 ubuntu.go:182] provisioning hostname "calico-293335"
	I1027 22:40:52.537200  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:52.561910  766237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:52.562445  766237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1027 22:40:52.562472  766237 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-293335 && echo "calico-293335" | sudo tee /etc/hostname
	I1027 22:40:52.734619  766237 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-293335
	
	I1027 22:40:52.734707  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:52.753676  766237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:52.753995  766237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1027 22:40:52.754027  766237 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-293335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-293335/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-293335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:40:52.903843  766237 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:40:52.903874  766237 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:40:52.903897  766237 ubuntu.go:190] setting up certificates
	I1027 22:40:52.903912  766237 provision.go:84] configureAuth start
	I1027 22:40:52.904021  766237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-293335
	I1027 22:40:52.925391  766237 provision.go:143] copyHostCerts
	I1027 22:40:52.925465  766237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:40:52.925491  766237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:40:52.925589  766237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:40:52.925729  766237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:40:52.925742  766237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:40:52.925785  766237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:40:52.925891  766237 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:40:52.925902  766237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:40:52.925938  766237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:40:52.926063  766237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.calico-293335 san=[127.0.0.1 192.168.76.2 calico-293335 localhost minikube]
	I1027 22:40:53.029282  766237 provision.go:177] copyRemoteCerts
	I1027 22:40:53.029337  766237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:40:53.029370  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.046989  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:53.149786  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:40:53.170180  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 22:40:53.188150  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:40:53.205256  766237 provision.go:87] duration metric: took 301.326822ms to configureAuth
	I1027 22:40:53.205285  766237 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:40:53.205439  766237 config.go:182] Loaded profile config "calico-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:40:53.205592  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.224194  766237 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:53.224503  766237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1027 22:40:53.224543  766237 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:40:53.497979  766237 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:40:53.498007  766237 machine.go:97] duration metric: took 1.138401268s to provisionDockerMachine
	I1027 22:40:53.498017  766237 client.go:176] duration metric: took 8.239974406s to LocalClient.Create
	I1027 22:40:53.498040  766237 start.go:167] duration metric: took 8.240050323s to libmachine.API.Create "calico-293335"
	I1027 22:40:53.498051  766237 start.go:293] postStartSetup for "calico-293335" (driver="docker")
	I1027 22:40:53.498064  766237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:40:53.498128  766237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:40:53.498195  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.519767  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:53.623886  766237 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:40:53.627827  766237 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:40:53.627860  766237 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:40:53.627873  766237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:40:53.627917  766237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:40:53.628068  766237 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:40:53.628204  766237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:40:53.636125  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:53.656207  766237 start.go:296] duration metric: took 158.142832ms for postStartSetup
	I1027 22:40:53.656553  766237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-293335
	I1027 22:40:53.676775  766237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/config.json ...
	I1027 22:40:53.677102  766237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:40:53.677159  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.699406  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:53.801467  766237 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:40:53.806541  766237 start.go:128] duration metric: took 8.550734809s to createHost
	I1027 22:40:53.806573  766237 start.go:83] releasing machines lock for "calico-293335", held for 8.550920879s
	I1027 22:40:53.806657  766237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-293335
	I1027 22:40:53.824637  766237 ssh_runner.go:195] Run: cat /version.json
	I1027 22:40:53.824692  766237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:40:53.824705  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.824760  766237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-293335
	I1027 22:40:53.843390  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:53.845254  766237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/calico-293335/id_rsa Username:docker}
	I1027 22:40:54.002508  766237 ssh_runner.go:195] Run: systemctl --version
	I1027 22:40:54.009727  766237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:40:54.048990  766237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:40:54.054419  766237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:40:54.054478  766237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:40:54.082233  766237 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 22:40:54.082261  766237 start.go:496] detecting cgroup driver to use...
	I1027 22:40:54.082295  766237 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:40:54.082361  766237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:40:54.101079  766237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:40:54.113791  766237 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:40:54.113854  766237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:40:54.132045  766237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:40:54.151507  766237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:40:54.259215  766237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:40:54.380935  766237 docker.go:234] disabling docker service ...
	I1027 22:40:54.381026  766237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:40:54.402082  766237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:40:54.415939  766237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:40:54.524038  766237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:40:54.630569  766237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:40:54.643844  766237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:40:54.659000  766237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:40:54.659073  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.671148  766237 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:40:54.671216  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.681376  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.691671  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.700257  766237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:40:54.708994  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.718844  766237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.734139  766237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:40:54.743246  766237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:40:54.751043  766237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:40:54.758461  766237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:54.846192  766237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:40:51.932890  769174 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 22:40:51.933235  769174 start.go:159] libmachine.API.Create for "custom-flannel-293335" (driver="docker")
	I1027 22:40:51.933279  769174 client.go:173] LocalClient.Create starting
	I1027 22:40:51.933368  769174 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem
	I1027 22:40:51.933418  769174 main.go:143] libmachine: Decoding PEM data...
	I1027 22:40:51.933444  769174 main.go:143] libmachine: Parsing certificate...
	I1027 22:40:51.933542  769174 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem
	I1027 22:40:51.933578  769174 main.go:143] libmachine: Decoding PEM data...
	I1027 22:40:51.933591  769174 main.go:143] libmachine: Parsing certificate...
	I1027 22:40:51.934061  769174 cli_runner.go:164] Run: docker network inspect custom-flannel-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:40:51.957619  769174 cli_runner.go:211] docker network inspect custom-flannel-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:40:51.957682  769174 network_create.go:284] running [docker network inspect custom-flannel-293335] to gather additional debugging logs...
	I1027 22:40:51.957738  769174 cli_runner.go:164] Run: docker network inspect custom-flannel-293335
	W1027 22:40:51.978287  769174 cli_runner.go:211] docker network inspect custom-flannel-293335 returned with exit code 1
	I1027 22:40:51.978331  769174 network_create.go:287] error running [docker network inspect custom-flannel-293335]: docker network inspect custom-flannel-293335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-293335 not found
	I1027 22:40:51.978351  769174 network_create.go:289] output of [docker network inspect custom-flannel-293335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-293335 not found
	
	** /stderr **
	I1027 22:40:51.978500  769174 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:51.998776  769174 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
	I1027 22:40:51.999808  769174 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b2deffb37428 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:63:99:4f:c9:29} reservation:<nil>}
	I1027 22:40:52.000370  769174 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8aa1ad217c0a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:19:7b:f4:de:20} reservation:<nil>}
	I1027 22:40:52.001238  769174 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4ce6e82cd489 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:d8:04:42:9a:06} reservation:<nil>}
	I1027 22:40:52.002080  769174 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-608fda872b8d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:32:7c:76:35:72} reservation:<nil>}
	I1027 22:40:52.003151  769174 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e5c6f0}
	I1027 22:40:52.003184  769174 network_create.go:124] attempt to create docker network custom-flannel-293335 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 22:40:52.003252  769174 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-293335 custom-flannel-293335
	I1027 22:40:52.073623  769174 network_create.go:108] docker network custom-flannel-293335 192.168.94.0/24 created
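
[editor's note] network.go walks the private 192.168.x.0/24 ranges, skips any subnet already claimed by an existing bridge, and creates the cluster network on the first free one. A minimal sketch of the same probe-then-create flow; the network name here is illustrative, the flags match the `docker network create` invocation above:

  # sketch: list subnets already taken by bridge networks, then create a labeled one
  docker network ls --filter driver=bridge -q |
    xargs -r docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
  docker network create --driver=bridge \
    --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
    -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true example-net
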
	I1027 22:40:52.073653  769174 kic.go:121] calculated static IP "192.168.94.2" for the "custom-flannel-293335" container
	I1027 22:40:52.073720  769174 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:40:52.092023  769174 cli_runner.go:164] Run: docker volume create custom-flannel-293335 --label name.minikube.sigs.k8s.io=custom-flannel-293335 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:40:52.113578  769174 oci.go:103] Successfully created a docker volume custom-flannel-293335
	I1027 22:40:52.113659  769174 cli_runner.go:164] Run: docker run --rm --name custom-flannel-293335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-293335 --entrypoint /usr/bin/test -v custom-flannel-293335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:40:52.581594  769174 oci.go:107] Successfully prepared a docker volume custom-flannel-293335
	I1027 22:40:52.581652  769174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:52.581679  769174 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:40:52.581760  769174 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
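
[editor's note] The preload is an lz4-compressed image tarball unpacked into the named volume by a throwaway container whose entrypoint is tar, exactly as in the `docker run` line above. A sketch under the assumption that a preload tarball exists locally (the host path here is a placeholder):

  # sketch: extract a preload tarball into a named volume via a throwaway container
  docker volume create example-vol
  docker run --rm \
    -v "$HOME/preloaded-images.tar.lz4:/preloaded.tar:ro" \
    -v example-vol:/extractDir \
    --entrypoint /usr/bin/tar \
    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
    -I lz4 -xf /preloaded.tar -C /extractDir
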
	I1027 22:40:57.183824  766237 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.337587879s)
	I1027 22:40:57.183872  766237 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:40:57.183934  766237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:40:57.188627  766237 start.go:564] Will wait 60s for crictl version
	I1027 22:40:57.188672  766237 ssh_runner.go:195] Run: which crictl
	I1027 22:40:57.192665  766237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:40:57.220069  766237 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:40:57.220133  766237 ssh_runner.go:195] Run: crio --version
	I1027 22:40:57.252385  766237 ssh_runner.go:195] Run: crio --version
	I1027 22:40:57.290685  766237 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
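
[editor's note] After restarting CRI-O, minikube waits for the socket and queries the runtime through crictl, yielding the Version/RuntimeName/RuntimeVersion block above. The equivalent manual checks, assuming crictl is installed on the node:

  # sketch: confirm CRI-O answers on its socket (same checks as the log)
  stat /var/run/crio/crio.sock
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  crio --version
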
	W1027 22:40:54.194496  753728 pod_ready.go:104] pod "coredns-66bc5c9577-bvr8f" is not "Ready", error: <nil>
	I1027 22:40:56.741030  753728 pod_ready.go:94] pod "coredns-66bc5c9577-bvr8f" is "Ready"
	I1027 22:40:56.741063  753728 pod_ready.go:86] duration metric: took 39.05899716s for pod "coredns-66bc5c9577-bvr8f" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.743363  753728 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.747077  753728 pod_ready.go:94] pod "etcd-default-k8s-diff-port-927034" is "Ready"
	I1027 22:40:56.747096  753728 pod_ready.go:86] duration metric: took 3.70853ms for pod "etcd-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.748964  753728 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.752297  753728 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-927034" is "Ready"
	I1027 22:40:56.752319  753728 pod_ready.go:86] duration metric: took 3.336424ms for pod "kube-apiserver-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.753900  753728 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:56.964146  753728 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-927034" is "Ready"
	I1027 22:40:56.964180  753728 pod_ready.go:86] duration metric: took 210.259581ms for pod "kube-controller-manager-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:57.085513  753728 pod_ready.go:83] waiting for pod "kube-proxy-42dj4" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:57.485865  753728 pod_ready.go:94] pod "kube-proxy-42dj4" is "Ready"
	I1027 22:40:57.485893  753728 pod_ready.go:86] duration metric: took 400.353128ms for pod "kube-proxy-42dj4" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:57.690363  753728 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:58.085850  753728 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-927034" is "Ready"
	I1027 22:40:58.085884  753728 pod_ready.go:86] duration metric: took 395.468301ms for pod "kube-scheduler-default-k8s-diff-port-927034" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:40:58.085898  753728 pod_ready.go:40] duration metric: took 40.407360832s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:40:58.136014  753728 start.go:626] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 22:40:58.137054  753728 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-927034" cluster and "default" namespace by default
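
[editor's note] pod_ready.go polls kube-system pods by the six labels listed above until each is Ready or gone. Roughly the same gate can be expressed with kubectl; a sketch using the label selectors from the log (timeout value is illustrative):

  # sketch: approximate minikube's readiness gate with kubectl
  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
    kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=120s
  done
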
	I1027 22:40:53.234463  764907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.crt ...
	I1027 22:40:53.234492  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.crt: {Name:mk8a2a4e4b1b7f25a50930365fa42a1aeaf808e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.234671  764907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.key ...
	I1027 22:40:53.234686  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/client.key: {Name:mk17fe7e0b8300369bdd6fde5af683c8c3797d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.234794  764907 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key.bc0cab6b
	I1027 22:40:53.234810  764907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt.bc0cab6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 22:40:53.747227  764907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt.bc0cab6b ...
	I1027 22:40:53.747254  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt.bc0cab6b: {Name:mkacd917aaf5e9d405b91cc9b91d2556a5c51006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.747438  764907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key.bc0cab6b ...
	I1027 22:40:53.747456  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key.bc0cab6b: {Name:mka5d7a523b0434d926752f0e350c36c00626981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.747559  764907 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt.bc0cab6b -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt
	I1027 22:40:53.747677  764907 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key.bc0cab6b -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key
	I1027 22:40:53.747761  764907 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.key
	I1027 22:40:53.747784  764907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.crt with IP's: []
	I1027 22:40:53.867931  764907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.crt ...
	I1027 22:40:53.867971  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.crt: {Name:mkcb8d2bf0f39ac86fa2fc73b388ddace10578a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.868164  764907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.key ...
	I1027 22:40:53.868184  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.key: {Name:mk742361a9670894b2da88022d7c0a09ae9546b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:53.868413  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:40:53.868453  764907 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:40:53.868467  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:40:53.868497  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:40:53.868521  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:40:53.868548  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:40:53.868599  764907 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
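
[editor's note] The cert steps above generate a client cert, an apiserver serving cert with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and an aggregator proxy-client cert. minikube does this internally in Go (crypto.go); purely for illustration, an openssl equivalent of the SAN cert, assuming ca.crt/ca.key are at hand:

  # sketch: issue an apiserver cert with the same IP SANs using openssl
  # (an illustrative stand-in, not minikube's own code path)
  openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
    -subj "/CN=minikube" -out apiserver.csr
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out apiserver.crt \
    -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2")
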
	I1027 22:40:53.869171  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:40:53.889070  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:40:53.909429  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:40:53.930564  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:40:53.949254  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:40:53.969006  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:40:53.989244  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:40:54.008096  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kindnet-293335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:40:54.029180  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:40:54.052612  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:40:54.072953  764907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:40:54.092106  764907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:40:54.106701  764907 ssh_runner.go:195] Run: openssl version
	I1027 22:40:54.113393  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:40:54.122899  764907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:54.126935  764907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:54.127000  764907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:54.164632  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:40:54.173835  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:40:54.184434  764907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:40:54.192984  764907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:40:54.193074  764907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:40:54.247930  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:40:54.257906  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:40:54.267831  764907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:40:54.271852  764907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:40:54.271914  764907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:40:54.333361  764907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
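
[editor's note] The ln -fs dance above publishes each CA under its OpenSSL subject-hash name in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL's default verifier locates trust anchors. The hash comes straight from `openssl x509 -hash`; a sketch of the same step for one cert:

  # sketch: install a CA into /etc/ssl/certs under its subject-hash name
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
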
	I1027 22:40:54.342635  764907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:40:54.346503  764907 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:40:54.346562  764907 kubeadm.go:401] StartCluster: {Name:kindnet-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
REJOIN_MARKER
	I1027 22:40:54.346636  764907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:40:54.346690  764907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:40:54.378097  764907 cri.go:89] found id: ""
	I1027 22:40:54.378178  764907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:40:54.387988  764907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:40:54.396257  764907 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:40:54.396316  764907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:40:54.405014  764907 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:40:54.405036  764907 kubeadm.go:158] found existing configuration files:
	
	I1027 22:40:54.405078  764907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:40:54.412982  764907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:40:54.413037  764907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:40:54.420999  764907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:40:54.428872  764907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:40:54.428930  764907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:40:54.437438  764907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:40:54.447248  764907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:40:54.447310  764907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:40:54.461700  764907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:40:54.471703  764907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:40:54.471770  764907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:40:54.480229  764907 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:40:54.548278  764907 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 22:40:54.620849  764907 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
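
[editor's note] Both preflight warnings above are benign in this environment: the GCP kernel ships no loadable "configs" module, and kubelet is deliberately left disabled because minikube starts it itself. To see why the first warning fires on a similar Ubuntu/GCP kernel, the kernel config can be checked directly:

  # sketch: kubeadm's SystemVerification wants the kernel config; on GCP kernels
  # the "configs" module is absent, but the config usually still lives under /boot
  sudo modprobe configs || true
  ls /proc/config.gz /boot/config-"$(uname -r)" 2>/dev/null
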
	I1027 22:40:57.291849  766237 cli_runner.go:164] Run: docker network inspect calico-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:40:57.309524  766237 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 22:40:57.313748  766237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:57.324782  766237 kubeadm.go:884] updating cluster {Name:calico-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:40:57.324939  766237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:40:57.325034  766237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:57.361809  766237 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:57.361833  766237 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:40:57.361887  766237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:40:57.391558  766237 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:40:57.391577  766237 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:40:57.391585  766237 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 22:40:57.391680  766237 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-293335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1027 22:40:57.391743  766237 ssh_runner.go:195] Run: crio config
	I1027 22:40:57.459773  766237 cni.go:84] Creating CNI manager for "calico"
	I1027 22:40:57.459803  766237 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:40:57.459824  766237 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-293335 NodeName:calico-293335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:40:57.459934  766237 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-293335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
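
[editor's note] The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new on the node and later fed to kubeadm init. A config like this can be sanity-checked without mutating the node, assuming it is saved at the path from the log:

  # sketch: exercise a generated kubeadm config without applying it
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
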
	
	I1027 22:40:57.460013  766237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:40:57.471187  766237 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:40:57.471258  766237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:40:57.480735  766237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 22:40:57.497542  766237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:40:57.516915  766237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1027 22:40:57.530417  766237 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:40:57.534898  766237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:40:57.546283  766237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:40:57.652241  766237 ssh_runner.go:195] Run: sudo systemctl start kubelet
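
[editor's note] The three scp lines above stage the kubelet unit (kubelet.service plus the 10-kubeadm.conf drop-in) before the daemon-reload and start. Whether the drop-in actually took effect can be confirmed from the merged unit definition:

  # sketch: verify the staged unit and drop-in after daemon-reload
  systemctl cat kubelet | head
  systemctl is-active kubelet
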
	I1027 22:40:57.674264  766237 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335 for IP: 192.168.76.2
	I1027 22:40:57.674285  766237 certs.go:195] generating shared ca certs ...
	I1027 22:40:57.674306  766237 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:57.674490  766237 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:40:57.674550  766237 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:40:57.674563  766237 certs.go:257] generating profile certs ...
	I1027 22:40:57.674640  766237 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.key
	I1027 22:40:57.674655  766237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.crt with IP's: []
	I1027 22:40:57.810865  766237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.crt ...
	I1027 22:40:57.810893  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.crt: {Name:mk629e57e640b2d978cc7e13e15f6398293dfeea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:57.811151  766237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.key ...
	I1027 22:40:57.811181  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/client.key: {Name:mk2a201a94556c2d8f3f8e188277f1d484d58800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:57.811317  766237 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key.897c17e8
	I1027 22:40:57.811341  766237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt.897c17e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 22:40:58.163850  766237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt.897c17e8 ...
	I1027 22:40:58.163916  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt.897c17e8: {Name:mkf2dbf16899bc5e31429a004da738ca0eecd618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:58.164152  766237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key.897c17e8 ...
	I1027 22:40:58.164221  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key.897c17e8: {Name:mkc75ce064a0a25d6f8f99ff3ef6715f417c64f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:58.164397  766237 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt.897c17e8 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt
	I1027 22:40:58.164531  766237 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key.897c17e8 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key
	I1027 22:40:58.164639  766237 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.key
	I1027 22:40:58.164680  766237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.crt with IP's: []
	I1027 22:40:58.716536  766237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.crt ...
	I1027 22:40:58.716564  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.crt: {Name:mkd20b58ab16cc7fcddd160ed6065699ca0a847c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:58.716744  766237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.key ...
	I1027 22:40:58.716756  766237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.key: {Name:mk7ed32c95e75a8c4ed6a2b273255eb2129c50d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:40:58.716929  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:40:58.716986  766237 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:40:58.717001  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:40:58.717029  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:40:58.717054  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:40:58.717075  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:40:58.717124  766237 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:40:58.717699  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:40:58.736363  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:40:58.753781  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:40:58.770568  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:40:58.787800  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:40:58.805810  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:40:58.824258  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:40:58.843071  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/calico-293335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:40:58.861187  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:40:58.880304  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:40:58.897432  766237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:40:58.915417  766237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:40:58.928692  766237 ssh_runner.go:195] Run: openssl version
	I1027 22:40:58.935482  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:40:58.944753  766237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:58.949220  766237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:58.949288  766237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:40:58.986673  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 22:40:58.995832  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:40:59.004727  766237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:40:59.008427  766237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:40:59.008484  766237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:40:59.043062  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:40:59.052259  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:40:59.060637  766237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:40:59.064417  766237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:40:59.064479  766237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:40:59.099738  766237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:40:59.109048  766237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:40:59.112950  766237 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:40:59.113019  766237 kubeadm.go:401] StartCluster: {Name:calico-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:40:59.113095  766237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:40:59.113139  766237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:40:59.142926  766237 cri.go:89] found id: ""
	I1027 22:40:59.143015  766237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:40:59.151697  766237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:40:59.160335  766237 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:40:59.160409  766237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:40:59.169171  766237 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:40:59.169189  766237 kubeadm.go:158] found existing configuration files:
	
	I1027 22:40:59.169256  766237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:40:59.180395  766237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:40:59.180571  766237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:40:59.191609  766237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:40:59.200812  766237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:40:59.200885  766237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:40:59.210313  766237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:40:59.220029  766237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:40:59.220112  766237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:40:59.228175  766237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:40:59.237010  766237 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:40:59.237078  766237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:40:59.245045  766237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:40:59.286080  766237 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:40:59.286157  766237 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:40:59.309426  766237 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:40:59.309527  766237 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:40:59.309579  766237 kubeadm.go:319] OS: Linux
	I1027 22:40:59.309662  766237 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:40:59.309753  766237 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:40:59.309827  766237 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:40:59.309897  766237 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:40:59.309980  766237 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:40:59.310054  766237 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:40:59.310129  766237 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:40:59.310192  766237 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:40:59.370851  766237 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:40:59.370996  766237 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:40:59.371123  766237 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:40:59.379315  766237 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:40:59.381157  766237 out.go:252]   - Generating certificates and keys ...
	I1027 22:40:59.381258  766237 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:40:59.381349  766237 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:40:59.471724  766237 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:40:59.920277  766237 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:40:59.969156  766237 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:40:57.090281  769174 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-293335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.508476726s)
	I1027 22:40:57.090308  769174 kic.go:203] duration metric: took 4.508627655s to extract preloaded images to volume ...
	W1027 22:40:57.090401  769174 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 22:40:57.090444  769174 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 22:40:57.090501  769174 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:40:57.155616  769174 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-293335 --name custom-flannel-293335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-293335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-293335 --network custom-flannel-293335 --ip 192.168.94.2 --volume custom-flannel-293335:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:40:57.444277  769174 cli_runner.go:164] Run: docker container inspect custom-flannel-293335 --format={{.State.Running}}
	I1027 22:40:57.464933  769174 cli_runner.go:164] Run: docker container inspect custom-flannel-293335 --format={{.State.Status}}
	I1027 22:40:57.486157  769174 cli_runner.go:164] Run: docker exec custom-flannel-293335 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:40:57.534261  769174 oci.go:144] the created container "custom-flannel-293335" has a running status.
	I1027 22:40:57.534293  769174 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa...
	I1027 22:40:57.568573  769174 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:40:57.601245  769174 cli_runner.go:164] Run: docker container inspect custom-flannel-293335 --format={{.State.Status}}
	I1027 22:40:57.627483  769174 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:40:57.627512  769174 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-293335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:40:57.682602  769174 cli_runner.go:164] Run: docker container inspect custom-flannel-293335 --format={{.State.Status}}
	I1027 22:40:57.706361  769174 machine.go:94] provisionDockerMachine start ...
	I1027 22:40:57.706466  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:40:57.730720  769174 main.go:143] libmachine: Using SSH client type: native
	I1027 22:40:57.731121  769174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1027 22:40:57.731246  769174 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:40:57.731978  769174 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35428->127.0.0.1:33118: read: connection reset by peer
	I1027 22:41:00.897267  769174 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-293335
	
	I1027 22:41:00.897301  769174 ubuntu.go:182] provisioning hostname "custom-flannel-293335"
	I1027 22:41:00.897368  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:00.923192  769174 main.go:143] libmachine: Using SSH client type: native
	I1027 22:41:00.923413  769174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1027 22:41:00.923426  769174 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-293335 && echo "custom-flannel-293335" | sudo tee /etc/hostname
	I1027 22:41:01.081500  769174 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-293335
	
	I1027 22:41:01.081589  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:01.103293  769174 main.go:143] libmachine: Using SSH client type: native
	I1027 22:41:01.103582  769174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1027 22:41:01.103602  769174 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-293335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-293335/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-293335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:41:01.249302  769174 main.go:143] libmachine: SSH cmd err, output: <nil>: 
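	The script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1: an existing 127.0.1.1 entry is rewritten in place with sed, otherwise a new one is appended. Its effect can be checked from the host; a minimal sketch using the container name from this run:
	
	  docker exec custom-flannel-293335 grep 127.0.1.1 /etc/hosts   # expect: 127.0.1.1 custom-flannel-293335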
	I1027 22:41:01.249346  769174 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-482142/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-482142/.minikube}
	I1027 22:41:01.249378  769174 ubuntu.go:190] setting up certificates
	I1027 22:41:01.249396  769174 provision.go:84] configureAuth start
	I1027 22:41:01.249467  769174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-293335
	I1027 22:41:01.269271  769174 provision.go:143] copyHostCerts
	I1027 22:41:01.269337  769174 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem, removing ...
	I1027 22:41:01.269352  769174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem
	I1027 22:41:01.269442  769174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/ca.pem (1078 bytes)
	I1027 22:41:01.269570  769174 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem, removing ...
	I1027 22:41:01.269580  769174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem
	I1027 22:41:01.269617  769174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/cert.pem (1123 bytes)
	I1027 22:41:01.269704  769174 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem, removing ...
	I1027 22:41:01.269713  769174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem
	I1027 22:41:01.269744  769174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-482142/.minikube/key.pem (1679 bytes)
	I1027 22:41:01.269825  769174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-293335 san=[127.0.0.1 192.168.94.2 custom-flannel-293335 localhost minikube]
	I1027 22:41:01.737659  769174 provision.go:177] copyRemoteCerts
	I1027 22:41:01.737728  769174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:41:01.737791  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:01.760868  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:01.869920  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:41:01.893759  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 22:41:01.917866  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1027 22:41:01.939251  769174 provision.go:87] duration metric: took 689.835044ms to configureAuth
	I1027 22:41:01.939284  769174 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:41:01.939494  769174 config.go:182] Loaded profile config "custom-flannel-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:41:01.939612  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:01.959285  769174 main.go:143] libmachine: Using SSH client type: native
	I1027 22:41:01.959634  769174 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1027 22:41:01.959662  769174 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:41:02.285644  769174 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:41:02.285676  769174 machine.go:97] duration metric: took 4.579290946s to provisionDockerMachine
	I1027 22:41:02.285690  769174 client.go:176] duration metric: took 10.352400611s to LocalClient.Create
	I1027 22:41:02.285718  769174 start.go:167] duration metric: took 10.352486469s to libmachine.API.Create "custom-flannel-293335"
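	On the CRIO_MINIKUBE_OPTIONS file written a few lines up: the crio systemd unit in the kicbase image is presumed to source /etc/sysconfig/crio.minikube as an environment file, which would explain why a plain file write followed by systemctl restart crio is enough to add the insecure-registry flag. A hedged way to confirm that assumption from the host:
	
	  docker exec custom-flannel-293335 systemctl cat crio | grep -i environment   # should list /etc/sysconfig/crio.minikube if the assumption holds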
	I1027 22:41:02.285729  769174 start.go:293] postStartSetup for "custom-flannel-293335" (driver="docker")
	I1027 22:41:02.285747  769174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:41:02.285829  769174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:41:02.285897  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:02.306389  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:02.425456  769174 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:41:02.429252  769174 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:41:02.429281  769174 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:41:02.429292  769174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/addons for local assets ...
	I1027 22:41:02.429411  769174 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-482142/.minikube/files for local assets ...
	I1027 22:41:02.429515  769174 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem -> 4856682.pem in /etc/ssl/certs
	I1027 22:41:02.429638  769174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 22:41:02.437650  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:41:02.458607  769174 start.go:296] duration metric: took 172.856806ms for postStartSetup
	I1027 22:41:02.459004  769174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-293335
	I1027 22:41:02.486822  769174 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/config.json ...
	I1027 22:41:02.487191  769174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:41:02.487247  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:02.510754  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:02.615681  769174 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:41:02.621194  769174 start.go:128] duration metric: took 10.690821914s to createHost
	I1027 22:41:02.621221  769174 start.go:83] releasing machines lock for "custom-flannel-293335", held for 10.690983139s
	I1027 22:41:02.621307  769174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-293335
	I1027 22:41:02.643258  769174 ssh_runner.go:195] Run: cat /version.json
	I1027 22:41:02.643334  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:02.643351  769174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:41:02.643428  769174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-293335
	I1027 22:41:02.665468  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:02.666006  769174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/custom-flannel-293335/id_rsa Username:docker}
	I1027 22:41:02.822349  769174 ssh_runner.go:195] Run: systemctl --version
	I1027 22:41:02.829326  769174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:41:02.866157  769174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:41:02.871295  769174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:41:02.871376  769174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:41:02.896721  769174 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
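	The find/mv above sidelines any bridge- or podman-managed CNI configs by renaming them with a .mk_disabled suffix, so that only the CNI minikube installs later (for this profile, the testdata/kube-flannel.yaml manifest) is active. A quick check, assuming a shell on the node:
	
	  ls /etc/cni/net.d   # the disabled configs now carry the .mk_disabled suffix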
	I1027 22:41:02.896747  769174 start.go:496] detecting cgroup driver to use...
	I1027 22:41:02.896778  769174 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 22:41:02.896840  769174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:41:02.919291  769174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:41:02.933526  769174 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:41:02.933577  769174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:41:02.952734  769174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:41:02.972547  769174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:41:03.057051  769174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:41:03.147312  769174 docker.go:234] disabling docker service ...
	I1027 22:41:03.147393  769174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:41:03.167432  769174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:41:03.180895  769174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:41:03.269331  769174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:41:03.363726  769174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:41:03.379511  769174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:41:03.397598  769174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:41:03.397662  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.409720  769174 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 22:41:03.409803  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.421351  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.433007  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.443855  769174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:41:03.454036  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.465173  769174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:41:03.482368  769174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
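	Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager with conmon placed in the pod cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The result can be inspected in one go, assuming a shell on the node:
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf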
	I1027 22:41:03.493715  769174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:41:03.503082  769174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:41:03.512231  769174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:41:03.612076  769174 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:41:03.748140  769174 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:41:03.748212  769174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:41:03.753601  769174 start.go:564] Will wait 60s for crictl version
	I1027 22:41:03.753673  769174 ssh_runner.go:195] Run: which crictl
	I1027 22:41:03.758152  769174 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:41:03.789412  769174 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:41:03.789510  769174 ssh_runner.go:195] Run: crio --version
	I1027 22:41:03.824915  769174 ssh_runner.go:195] Run: crio --version
	I1027 22:41:03.862290  769174 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:41:00.129184  766237 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:41:00.289522  766237 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:41:00.289683  766237 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-293335 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 22:41:00.933672  766237 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:41:00.933919  766237 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-293335 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 22:41:01.191005  766237 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:41:01.725995  766237 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:41:02.192277  766237 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:41:02.192445  766237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:41:02.442182  766237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:41:02.972371  766237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:41:03.414492  766237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:41:03.851564  766237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:41:04.114526  766237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:41:04.114983  766237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:41:04.118435  766237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:41:04.119885  766237 out.go:252]   - Booting up control plane ...
	I1027 22:41:04.120021  766237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:41:04.120119  766237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:41:04.121812  766237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:41:04.138479  766237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:41:04.138682  766237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:41:04.145096  766237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:41:04.145374  766237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:41:04.145461  766237 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:41:04.254165  766237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:41:04.254352  766237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:41:04.755805  766237 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.795781ms
	I1027 22:41:04.760374  766237 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:41:04.760506  766237 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 22:41:04.760688  766237 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:41:04.760829  766237 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:41:05.680318  764907 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:41:05.680402  764907 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:41:05.680523  764907 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:41:05.680591  764907 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:41:05.680631  764907 kubeadm.go:319] OS: Linux
	I1027 22:41:05.680687  764907 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:41:05.680748  764907 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:41:05.680808  764907 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:41:05.680866  764907 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:41:05.680925  764907 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:41:05.680995  764907 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:41:05.681045  764907 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:41:05.681088  764907 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:41:05.681165  764907 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:41:05.681272  764907 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:41:05.681370  764907 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:41:05.681460  764907 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:41:05.683109  764907 out.go:252]   - Generating certificates and keys ...
	I1027 22:41:05.683284  764907 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:41:05.683462  764907 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:41:05.683561  764907 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:41:05.683633  764907 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:41:05.683708  764907 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:41:05.683770  764907 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:41:05.683832  764907 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:41:05.683980  764907 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-293335 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 22:41:05.684050  764907 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:41:05.684208  764907 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-293335 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 22:41:05.684295  764907 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:41:05.684368  764907 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:41:05.684428  764907 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:41:05.684503  764907 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:41:05.684563  764907 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:41:05.684624  764907 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:41:05.684681  764907 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:41:05.684762  764907 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:41:05.684827  764907 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:41:05.685010  764907 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:41:05.685161  764907 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:41:05.687327  764907 out.go:252]   - Booting up control plane ...
	I1027 22:41:05.687532  764907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:41:05.687794  764907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:41:05.687882  764907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:41:05.688055  764907 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:41:05.688176  764907 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:41:05.688419  764907 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:41:05.688718  764907 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:41:05.688904  764907 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:41:05.689239  764907 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:41:05.689497  764907 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:41:05.689647  764907 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001146285s
	I1027 22:41:05.690016  764907 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:41:05.690176  764907 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 22:41:05.690351  764907 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:41:05.690463  764907 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:41:05.690553  764907 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.129331634s
	I1027 22:41:05.690649  764907 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.232695742s
	I1027 22:41:05.690738  764907 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001408494s
	I1027 22:41:05.690877  764907 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:41:05.691034  764907 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:41:05.691115  764907 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:41:05.691425  764907 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-293335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:41:05.691516  764907 kubeadm.go:319] [bootstrap-token] Using token: 529thl.08hybtrxaqjgjt94
	I1027 22:41:05.692906  764907 out.go:252]   - Configuring RBAC rules ...
	I1027 22:41:05.693089  764907 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:41:05.693268  764907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:41:05.693494  764907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:41:05.693655  764907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:41:05.693835  764907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:41:05.694000  764907 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:41:05.694185  764907 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:41:05.694259  764907 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:41:05.694325  764907 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:41:05.694333  764907 kubeadm.go:319] 
	I1027 22:41:05.694418  764907 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:41:05.694427  764907 kubeadm.go:319] 
	I1027 22:41:05.694526  764907 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:41:05.694534  764907 kubeadm.go:319] 
	I1027 22:41:05.694568  764907 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:41:05.694677  764907 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:41:05.694763  764907 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:41:05.694776  764907 kubeadm.go:319] 
	I1027 22:41:05.694855  764907 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:41:05.694865  764907 kubeadm.go:319] 
	I1027 22:41:05.694938  764907 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:41:05.694962  764907 kubeadm.go:319] 
	I1027 22:41:05.695036  764907 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:41:05.695151  764907 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:41:05.695245  764907 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:41:05.695258  764907 kubeadm.go:319] 
	I1027 22:41:05.695362  764907 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:41:05.695471  764907 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:41:05.695481  764907 kubeadm.go:319] 
	I1027 22:41:05.695586  764907 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 529thl.08hybtrxaqjgjt94 \
	I1027 22:41:05.695711  764907 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 22:41:05.695740  764907 kubeadm.go:319] 	--control-plane 
	I1027 22:41:05.695745  764907 kubeadm.go:319] 
	I1027 22:41:05.695847  764907 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:41:05.695856  764907 kubeadm.go:319] 
	I1027 22:41:05.695968  764907 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 529thl.08hybtrxaqjgjt94 \
	I1027 22:41:05.696111  764907 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
	I1027 22:41:05.696128  764907 cni.go:84] Creating CNI manager for "kindnet"
	I1027 22:41:05.698125  764907 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:41:03.863325  769174 cli_runner.go:164] Run: docker network inspect custom-flannel-293335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:41:03.884107  769174 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 22:41:03.888928  769174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:41:03.900357  769174 kubeadm.go:884] updating cluster {Name:custom-flannel-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:41:03.900494  769174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:41:03.900544  769174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:41:03.940214  769174 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:41:03.940247  769174 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:41:03.940310  769174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:41:03.971270  769174 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:41:03.971293  769174 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:41:03.971304  769174 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 22:41:03.971417  769174 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-293335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
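	The empty ExecStart= line in the unit text above is the standard systemd idiom for a drop-in that replaces, rather than appends to, the ExecStart of the base kubelet.service; the drop-in itself is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. The merged result can be viewed on the node with:
	
	  systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in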
	I1027 22:41:03.971501  769174 ssh_runner.go:195] Run: crio config
	I1027 22:41:04.021190  769174 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 22:41:04.021235  769174 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:41:04.021257  769174 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-293335 NodeName:custom-flannel-293335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:41:04.021381  769174 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-293335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
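	This generated config is written to /var/tmp/minikube/kubeadm.yaml.new below and copied into place before kubeadm init runs. If a syntax problem were suspected it could be checked ahead of time; a sketch, assuming the kubeadm v1.34.1 binary from the minikube binaries directory:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml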
	
	I1027 22:41:04.021434  769174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:41:04.030545  769174 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:41:04.030616  769174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:41:04.039138  769174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1027 22:41:04.051976  769174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:41:04.066765  769174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1027 22:41:04.079719  769174 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:41:04.083739  769174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:41:04.094127  769174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:41:04.188546  769174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:41:04.213818  769174 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335 for IP: 192.168.94.2
	I1027 22:41:04.213842  769174 certs.go:195] generating shared ca certs ...
	I1027 22:41:04.213865  769174 certs.go:227] acquiring lock for ca certs: {Name:mkc9e4e6f383cad37901561c6b7aaaa04fe49c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.214057  769174 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key
	I1027 22:41:04.214098  769174 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key
	I1027 22:41:04.214109  769174 certs.go:257] generating profile certs ...
	I1027 22:41:04.214163  769174 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.key
	I1027 22:41:04.214177  769174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.crt with IP's: []
	I1027 22:41:04.498919  769174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.crt ...
	I1027 22:41:04.498963  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.crt: {Name:mk3ecb20d0390181b7834facbabeb8a5d05066b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.499154  769174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.key ...
	I1027 22:41:04.499174  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/client.key: {Name:mkaea1120ea61308f96b400c93c5b59e919dea82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.499281  769174 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key.2e14aaf9
	I1027 22:41:04.499298  769174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt.2e14aaf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1027 22:41:04.795603  769174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt.2e14aaf9 ...
	I1027 22:41:04.795629  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt.2e14aaf9: {Name:mkdbc395cc94a13f41a68386e7b3bca65a674938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.795788  769174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key.2e14aaf9 ...
	I1027 22:41:04.795801  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key.2e14aaf9: {Name:mk3149882b9a9d67741a80bfc99cbac6b9807826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:04.795876  769174 certs.go:382] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt.2e14aaf9 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt
	I1027 22:41:04.795982  769174 certs.go:386] copying /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key.2e14aaf9 -> /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key
	I1027 22:41:04.796069  769174 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.key
	I1027 22:41:04.796088  769174 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.crt with IP's: []
	I1027 22:41:05.474482  769174 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.crt ...
	I1027 22:41:05.474518  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.crt: {Name:mkec66ca4413058c5e161f02688fd59c9cc61a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:05.474737  769174 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.key ...
	I1027 22:41:05.474754  769174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.key: {Name:mk663a8ce12e664e1f681fbe25a3d8c183eccd7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:05.474994  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem (1338 bytes)
	W1027 22:41:05.475048  769174 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668_empty.pem, impossibly tiny 0 bytes
	I1027 22:41:05.475062  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:41:05.475092  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/ca.pem (1078 bytes)
	I1027 22:41:05.475122  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:41:05.475159  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/certs/key.pem (1679 bytes)
	I1027 22:41:05.475213  769174 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem (1708 bytes)
	I1027 22:41:05.475989  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:41:05.495978  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1027 22:41:05.515763  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:41:05.534682  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:41:05.552526  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 22:41:05.571594  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:41:05.591117  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:41:05.610181  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/custom-flannel-293335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:41:05.629582  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/certs/485668.pem --> /usr/share/ca-certificates/485668.pem (1338 bytes)
	I1027 22:41:05.649120  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/ssl/certs/4856682.pem --> /usr/share/ca-certificates/4856682.pem (1708 bytes)
	I1027 22:41:05.667988  769174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:41:05.697529  769174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:41:05.714142  769174 ssh_runner.go:195] Run: openssl version
	I1027 22:41:05.721982  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/485668.pem && ln -fs /usr/share/ca-certificates/485668.pem /etc/ssl/certs/485668.pem"
	I1027 22:41:05.731186  769174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/485668.pem
	I1027 22:41:05.735929  769174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:00 /usr/share/ca-certificates/485668.pem
	I1027 22:41:05.736381  769174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/485668.pem
	I1027 22:41:05.783027  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/485668.pem /etc/ssl/certs/51391683.0"
	I1027 22:41:05.793801  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4856682.pem && ln -fs /usr/share/ca-certificates/4856682.pem /etc/ssl/certs/4856682.pem"
	I1027 22:41:05.805486  769174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4856682.pem
	I1027 22:41:05.810194  769174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:00 /usr/share/ca-certificates/4856682.pem
	I1027 22:41:05.810261  769174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4856682.pem
	I1027 22:41:05.859062  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4856682.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:41:05.869340  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:41:05.879097  769174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:41:05.883567  769174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 21:54 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:41:05.883629  769174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:41:05.931271  769174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
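
The sequence above installs each CA into the host trust store using OpenSSL's subject-hash lookup convention: `openssl x509 -hash -noout` prints the subject-name hash that OpenSSL expects as the symlink name under /etc/ssl/certs (b5213941 is that hash for minikubeCA.pem). A minimal sketch of the same pattern, with paths taken from the log:

    # Link a CA PEM under its subject-name hash so OpenSSL can find it.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
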
	I1027 22:41:05.942113  769174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:41:05.946919  769174 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:41:05.946993  769174 kubeadm.go:401] StartCluster: {Name:custom-flannel-293335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-293335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:41:05.947104  769174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:41:05.947181  769174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:41:05.983005  769174 cri.go:89] found id: ""
	I1027 22:41:05.983066  769174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:41:05.994200  769174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:41:06.007198  769174 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:41:06.007258  769174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:41:06.019604  769174 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:41:06.019629  769174 kubeadm.go:158] found existing configuration files:
	
	I1027 22:41:06.019679  769174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:41:06.031633  769174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:41:06.031708  769174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:41:06.041065  769174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:41:06.050194  769174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:41:06.050247  769174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:41:06.059729  769174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:41:06.070434  769174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:41:06.070479  769174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:41:06.078452  769174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:41:06.086906  769174 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:41:06.086989  769174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
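
Each of the four checks above follows the same pattern: grep the kubeconfig for the expected control-plane endpoint and, when it is absent (or the file does not exist, as here), remove the file so `kubeadm init` regenerates it. A sketch of that cleanup loop as the log performs it:

    # Stale-config cleanup pattern from the log above.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
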
	I1027 22:41:06.096058  769174 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:41:06.143632  769174 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:41:06.143696  769174 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:41:06.171260  769174 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:41:06.171356  769174 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 22:41:06.171449  769174 kubeadm.go:319] OS: Linux
	I1027 22:41:06.171526  769174 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:41:06.171596  769174 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:41:06.171687  769174 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:41:06.171766  769174 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:41:06.171852  769174 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:41:06.171937  769174 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:41:06.172035  769174 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:41:06.172112  769174 kubeadm.go:319] CGROUPS_IO: enabled
	I1027 22:41:06.234043  769174 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:41:06.234208  769174 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:41:06.234328  769174 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:41:06.242281  769174 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:41:06.244762  769174 out.go:252]   - Generating certificates and keys ...
	I1027 22:41:06.244851  769174 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:41:06.244931  769174 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:41:06.505140  769174 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:41:05.699921  764907 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:41:05.705103  764907 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:41:05.705140  764907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:41:05.720463  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:41:05.979526  764907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:41:05.979623  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:05.979644  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-293335 minikube.k8s.io/updated_at=2025_10_27T22_41_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=kindnet-293335 minikube.k8s.io/primary=true
	I1027 22:41:06.088826  764907 ops.go:34] apiserver oom_adj: -16
	I1027 22:41:06.088847  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:06.589437  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:07.089088  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:07.588886  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:08.089047  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:05.766067  766237 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005522152s
	I1027 22:41:07.900615  766237 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.140173797s
	I1027 22:41:09.261490  766237 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501034492s
	I1027 22:41:09.273793  766237 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:41:09.283344  766237 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:41:09.291497  766237 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:41:09.291799  766237 kubeadm.go:319] [mark-control-plane] Marking the node calico-293335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:41:09.299770  766237 kubeadm.go:319] [bootstrap-token] Using token: ae116e.rffapx0bx6ok1lcc
	I1027 22:41:09.301052  766237 out.go:252]   - Configuring RBAC rules ...
	I1027 22:41:09.301194  766237 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:41:09.304078  766237 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:41:09.308975  766237 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:41:09.311317  766237 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:41:09.313488  766237 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:41:09.317191  766237 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:41:09.668677  766237 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:41:07.063313  769174 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:41:07.357833  769174 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:41:07.467125  769174 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:41:07.730995  769174 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:41:07.731180  769174 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-293335 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:41:07.972039  769174 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:41:07.972261  769174 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-293335 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 22:41:08.421854  769174 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:41:08.736485  769174 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:41:09.136654  769174 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:41:09.136758  769174 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:41:09.685172  769174 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:41:10.010057  769174 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:41:10.148850  769174 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:41:10.486521  769174 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:41:10.568592  769174 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:41:10.569043  769174 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:41:10.572657  769174 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:41:10.081854  766237 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:41:10.672294  766237 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:41:10.672325  766237 kubeadm.go:319] 
	I1027 22:41:10.672418  766237 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:41:10.672425  766237 kubeadm.go:319] 
	I1027 22:41:10.672551  766237 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:41:10.672578  766237 kubeadm.go:319] 
	I1027 22:41:10.672615  766237 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:41:10.672692  766237 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:41:10.672762  766237 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:41:10.672773  766237 kubeadm.go:319] 
	I1027 22:41:10.672862  766237 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:41:10.672873  766237 kubeadm.go:319] 
	I1027 22:41:10.672924  766237 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:41:10.672962  766237 kubeadm.go:319] 
	I1027 22:41:10.673017  766237 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:41:10.673105  766237 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:41:10.673181  766237 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:41:10.673188  766237 kubeadm.go:319] 
	I1027 22:41:10.673308  766237 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:41:10.673433  766237 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:41:10.673453  766237 kubeadm.go:319] 
	I1027 22:41:10.673566  766237 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ae116e.rffapx0bx6ok1lcc \
	I1027 22:41:10.673715  766237 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d \
	I1027 22:41:10.673744  766237 kubeadm.go:319] 	--control-plane 
	I1027 22:41:10.673750  766237 kubeadm.go:319] 
	I1027 22:41:10.673866  766237 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:41:10.673874  766237 kubeadm.go:319] 
	I1027 22:41:10.674002  766237 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ae116e.rffapx0bx6ok1lcc \
	I1027 22:41:10.674155  766237 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c10d1bb830cd806add08a896ba151b0adcb387d9ad957a4283d3d561af4e1b1d 
	I1027 22:41:10.679802  766237 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 22:41:10.679970  766237 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
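
The `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 of the cluster CA's public key. If needed, it can be recomputed on the control plane to validate a join command (standard kubeadm recipe; the path below assumes the default kubeadm layout, while this run keeps its certs under /var/lib/minikube/certs):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
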
	I1027 22:41:10.680011  766237 cni.go:84] Creating CNI manager for "calico"
	I1027 22:41:10.681495  766237 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1027 22:41:08.588936  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:09.089816  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:09.589082  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:10.089844  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:10.588937  764907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:41:10.694895  764907 kubeadm.go:1114] duration metric: took 4.715364225s to wait for elevateKubeSystemPrivileges
	I1027 22:41:10.694927  764907 kubeadm.go:403] duration metric: took 16.348371447s to StartCluster
	I1027 22:41:10.694962  764907 settings.go:142] acquiring lock: {Name:mkb3bc20f86f7938bda0571f406f1866b0bf7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:10.695051  764907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:41:10.696039  764907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-482142/kubeconfig: {Name:mk3e21cb1b14e0445a123fdb491eea73da31f73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:41:10.696285  764907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 22:41:10.696295  764907 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:41:10.696365  764907 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:41:10.696488  764907 addons.go:69] Setting storage-provisioner=true in profile "kindnet-293335"
	I1027 22:41:10.696513  764907 addons.go:238] Setting addon storage-provisioner=true in "kindnet-293335"
	I1027 22:41:10.696538  764907 config.go:182] Loaded profile config "kindnet-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:41:10.696551  764907 host.go:66] Checking if "kindnet-293335" exists ...
	I1027 22:41:10.696532  764907 addons.go:69] Setting default-storageclass=true in profile "kindnet-293335"
	I1027 22:41:10.696575  764907 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-293335"
	I1027 22:41:10.697010  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:41:10.697041  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:41:10.697981  764907 out.go:179] * Verifying Kubernetes components...
	I1027 22:41:10.699191  764907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:41:10.725465  764907 addons.go:238] Setting addon default-storageclass=true in "kindnet-293335"
	I1027 22:41:10.725518  764907 host.go:66] Checking if "kindnet-293335" exists ...
	I1027 22:41:10.726033  764907 cli_runner.go:164] Run: docker container inspect kindnet-293335 --format={{.State.Status}}
	I1027 22:41:10.727306  764907 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:41:10.728331  764907 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:41:10.728402  764907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:41:10.728500  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:41:10.763720  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:41:10.765809  764907 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:41:10.765834  764907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:41:10.765936  764907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-293335
	I1027 22:41:10.793766  764907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/kindnet-293335/id_rsa Username:docker}
	I1027 22:41:10.831675  764907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
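
The sed pipeline above rewrites the CoreDNS Corefile in place: it injects a `hosts` block that resolves host.minikube.internal to the gateway IP ahead of the `forward` plugin, and enables the `log` plugin ahead of `errors`. Based on that sed expression, the resulting Corefile fragment should look roughly like:

            hosts {
               192.168.85.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
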
	I1027 22:41:10.883189  764907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:41:10.920886  764907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:41:10.933622  764907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:41:11.140816  764907 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1027 22:41:11.142961  764907 node_ready.go:35] waiting up to 15m0s for node "kindnet-293335" to be "Ready" ...
	I1027 22:41:11.381413  764907 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 22:41:10.573874  769174 out.go:252]   - Booting up control plane ...
	I1027 22:41:10.573992  769174 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:41:10.574094  769174 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:41:10.574676  769174 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:41:10.589360  769174 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:41:10.589487  769174 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:41:10.597649  769174 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:41:10.598001  769174 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:41:10.598049  769174 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:41:10.766579  769174 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:41:10.767597  769174 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:41:11.382290  764907 addons.go:514] duration metric: took 685.922764ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 22:41:11.646931  764907 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-293335" context rescaled to 1 replicas
	W1027 22:41:13.146285  764907 node_ready.go:57] node "kindnet-293335" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.721626226Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.721651984Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.721669876Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.725209321Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.725233886Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.725251401Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.728922125Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.728974726Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.729000761Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.732559482Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.732581249Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.732604624Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.73613406Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 22:40:27 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:27.736158378Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.913569453Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cc2ab518-5182-4f9c-9b36-ba312ddaaa56 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.914462268Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8f563b0e-8982-4fc0-b7a1-099410b98941 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.91555641Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p/dashboard-metrics-scraper" id=f562db44-6138-4016-ba75-0ccdcd8d938d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.915699376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.922007936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.922757571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.950441439Z" level=info msg="Created container 78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p/dashboard-metrics-scraper" id=f562db44-6138-4016-ba75-0ccdcd8d938d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.95105312Z" level=info msg="Starting container: 78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d" id=b6ce24a8-5f3e-47d9-ac81-0177483be4b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:40:43 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:43.952855331Z" level=info msg="Started container" PID=1768 containerID=78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p/dashboard-metrics-scraper id=b6ce24a8-5f3e-47d9-ac81-0177483be4b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6130e7efe60c9d745e9003841f305b90f3fb99dd8dd93aef34c48359307f896c
	Oct 27 22:40:44 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:44.032559592Z" level=info msg="Removing container: 6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0" id=e1f5d9bf-d21b-466f-956d-e16dd41f06dd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:40:44 default-k8s-diff-port-927034 crio[566]: time="2025-10-27T22:40:44.046732627Z" level=info msg="Removed container 6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p/dashboard-metrics-scraper" id=e1f5d9bf-d21b-466f-956d-e16dd41f06dd name=/runtime.v1.RuntimeService/RemoveContainer
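
The create/remove cycle above (attempt 2 of dashboard-metrics-scraper, with the previous attempt's container removed) can be examined against the same CRI API that CRI-O is logging, for example with crictl (illustrative commands, not part of this run):

    sudo crictl ps -a --name dashboard-metrics-scraper   # list all attempts, including exited ones
    sudo crictl logs 78283aebd6f8                        # logs of the container created above
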
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	78283aebd6f86       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   6130e7efe60c9       dashboard-metrics-scraper-6ffb444bf9-6x67p             kubernetes-dashboard
	943e0d285e380       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago       Running             kubernetes-dashboard        0                   c8910a047e8c5       kubernetes-dashboard-855c9754f9-s2lwd                  kubernetes-dashboard
	827e84f1fab22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Running             storage-provisioner         1                   df59a2d0ff396       storage-provisioner                                    kube-system
	b612efeb97942       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   b3366362daa15       busybox                                                default
	dd925db2f94fb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   a9d2f8707ce3e       coredns-66bc5c9577-bvr8f                               kube-system
	941141ecdf554       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   d14f524395552       kindnet-94cw9                                          kube-system
	dddf4daea9020       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   df59a2d0ff396       storage-provisioner                                    kube-system
	ababe86c36b42       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  0                   0c8569ca3e78c       kube-proxy-42dj4                                       kube-system
	9cda36d13a021       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   947444da57827       etcd-default-k8s-diff-port-927034                      kube-system
	a73ac42016306       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   51c42ac5032a0       kube-scheduler-default-k8s-diff-port-927034            kube-system
	341e84318f679       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   d66f84b438e81       kube-apiserver-default-k8s-diff-port-927034            kube-system
	844da32e0557f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   00ab06f81f16f       kube-controller-manager-default-k8s-diff-port-927034   kube-system
	
	
	==> coredns [dd925db2f94fb591e9c7cb190ecb837b75758b86b30152040595a82ecd10fac3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47352 - 41434 "HINFO IN 5411424138599910356.9208066112809769200. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02854589s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
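
The i/o timeouts against 10.96.0.1:443 indicate CoreDNS could not reach the kubernetes Service VIP while the apiserver (or the node's Service routing) was still coming up, consistent with the "waiting for Kubernetes API" lines above. A quick way to confirm the VIP is backed by a live endpoint (hypothetical triage command, not from this run):

    kubectl get endpoints kubernetes -n default
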
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-927034
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-927034
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=default-k8s-diff-port-927034
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_39_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:39:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-927034
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:41:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:40:57 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:40:57 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:40:57 +0000   Mon, 27 Oct 2025 22:39:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:40:57 +0000   Mon, 27 Oct 2025 22:40:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-927034
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                bea60602-4e46-4583-a378-a857a2ae88ea
	  Boot ID:                    c0303041-e5e2-482c-a249-f6a4f1c37819
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-bvr8f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-default-k8s-diff-port-927034                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-94cw9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-default-k8s-diff-port-927034             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-927034    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-42dj4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-default-k8s-diff-port-927034             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6x67p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s2lwd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  Starting                 58s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           115s                 node-controller  Node default-k8s-diff-port-927034 event: Registered Node default-k8s-diff-port-927034 in Controller
	  Normal  NodeReady                102s                 kubelet          Node default-k8s-diff-port-927034 status is now: NodeReady
	  Normal  Starting                 62s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 62s)    kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 62s)    kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 62s)    kubelet          Node default-k8s-diff-port-927034 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                  node-controller  Node default-k8s-diff-port-927034 event: Registered Node default-k8s-diff-port-927034 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 8f 78 32 70 d6 08 06
	[ +21.581069] IPv4: martian source 10.244.0.1 from 10.244.0.208, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 07 69 58 b5 8c 08 06
	[Oct27 21:56] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.048074] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.023980] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.024865] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +1.022982] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +2.047832] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +4.031696] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[  +8.511498] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[ +16.382890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000017] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	[Oct27 21:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: f6 a2 07 ad 26 e3 72 19 47 b1 c2 e6 08 00
	
	
	==> etcd [9cda36d13a02141502e61a8f0bd69b14fb79ac20826af4e9365b17402d4e4467] <==
	{"level":"warn","ts":"2025-10-27T22:40:15.442779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.452677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.460455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.470090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.477574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.484614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.491853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.498712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.506034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.516452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.524734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.532345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.539334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.557292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.560990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.568004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.575710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:40:15.621578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60688","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:40:47.442176Z","caller":"traceutil/trace.go:172","msg":"trace[1546843057] transaction","detail":"{read_only:false; response_revision:662; number_of_response:1; }","duration":"130.393ms","start":"2025-10-27T22:40:47.311755Z","end":"2025-10-27T22:40:47.442148Z","steps":["trace[1546843057] 'process raft request'  (duration: 63.321958ms)","trace[1546843057] 'compare'  (duration: 66.951846ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:40:56.552441Z","caller":"traceutil/trace.go:172","msg":"trace[685820210] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"157.012786ms","start":"2025-10-27T22:40:56.395408Z","end":"2025-10-27T22:40:56.552421Z","steps":["trace[685820210] 'process raft request'  (duration: 128.686703ms)","trace[685820210] 'compare'  (duration: 28.218527ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:40:56.555508Z","caller":"traceutil/trace.go:172","msg":"trace[1557955267] transaction","detail":"{read_only:false; response_revision:673; number_of_response:1; }","duration":"158.850601ms","start":"2025-10-27T22:40:56.396643Z","end":"2025-10-27T22:40:56.555493Z","steps":["trace[1557955267] 'process raft request'  (duration: 158.791802ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T22:40:56.555511Z","caller":"traceutil/trace.go:172","msg":"trace[1132699416] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"159.977305ms","start":"2025-10-27T22:40:56.395522Z","end":"2025-10-27T22:40:56.555499Z","steps":["trace[1132699416] 'process raft request'  (duration: 159.815947ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T22:40:56.736980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.496903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T22:40:56.737158Z","caller":"traceutil/trace.go:172","msg":"trace[2005717014] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:673; }","duration":"122.723902ms","start":"2025-10-27T22:40:56.614413Z","end":"2025-10-27T22:40:56.737137Z","steps":["trace[2005717014] 'agreement among raft nodes before linearized reading'  (duration: 80.647457ms)","trace[2005717014] 'range keys from in-memory index tree'  (duration: 41.821969ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T22:40:56.737482Z","caller":"traceutil/trace.go:172","msg":"trace[1402834480] transaction","detail":"{read_only:false; response_revision:674; number_of_response:1; }","duration":"175.407473ms","start":"2025-10-27T22:40:56.562048Z","end":"2025-10-27T22:40:56.737456Z","steps":["trace[1402834480] 'process raft request'  (duration: 133.067879ms)","trace[1402834480] 'compare'  (duration: 41.823297ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:41:16 up  2:23,  0 user,  load average: 4.90, 3.49, 3.03
	Linux default-k8s-diff-port-927034 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [941141ecdf5542a303eff7ec706390c2f855de75447f8261b3667f38a2495d01] <==
	I1027 22:40:17.517739       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:40:17.518009       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1027 22:40:17.518146       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:40:17.518164       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:40:17.518174       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:40:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:40:17.717288       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:40:17.816402       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:40:17.816562       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:40:17.816810       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 22:40:18.216995       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:40:18.217026       1 metrics.go:72] Registering metrics
	I1027 22:40:18.217095       1 controller.go:711] "Syncing nftables rules"
	I1027 22:40:27.717078       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:40:27.717137       1 main.go:301] handling current node
	I1027 22:40:37.723207       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:40:37.723236       1 main.go:301] handling current node
	I1027 22:40:47.717057       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:40:47.717100       1 main.go:301] handling current node
	I1027 22:40:57.718221       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:40:57.718259       1 main.go:301] handling current node
	I1027 22:41:07.726030       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1027 22:41:07.726068       1 main.go:301] handling current node
	
	
	==> kube-apiserver [341e84318f679f97a704241f45d9cfde3d9e2e8695ec44c4ff77dcb1b0fb2385] <==
	I1027 22:40:16.128715       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:40:16.128723       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:40:16.128729       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:40:16.137453       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:40:16.137500       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 22:40:16.137542       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 22:40:16.137658       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 22:40:16.137694       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 22:40:16.145175       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:40:16.178724       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 22:40:16.190903       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 22:40:16.190938       1 policy_source.go:240] refreshing policies
	I1027 22:40:16.196629       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 22:40:16.226472       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 22:40:16.423007       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 22:40:16.455836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:40:16.476090       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:40:16.482444       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:40:16.489744       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:40:16.523823       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.45.67"}
	I1027 22:40:16.534123       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.37.143"}
	I1027 22:40:17.030253       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:40:19.267876       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:40:19.366617       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:40:19.417242       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [844da32e0557faa56becf52073bd2e1d4107c6dcd6a6994bf7b807ec687a20df] <==
	I1027 22:40:18.824056       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:40:18.826660       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 22:40:18.827903       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:40:18.829926       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:40:18.848414       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:40:18.850643       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:40:18.852880       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:40:18.855133       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 22:40:18.863795       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 22:40:18.863888       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 22:40:18.863918       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:40:18.863980       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 22:40:18.864085       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 22:40:18.864152       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:40:18.864153       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 22:40:18.864269       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:40:18.864936       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 22:40:18.867119       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:40:18.869357       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:40:18.869363       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:40:18.871622       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 22:40:18.873822       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:40:18.876067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:40:18.878316       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 22:40:18.888638       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ababe86c36b425bd0273434f7b483138971716fbdf50f44c100e55918006dcfb] <==
	I1027 22:40:17.317309       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:40:17.379061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:40:17.479279       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:40:17.479335       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1027 22:40:17.479466       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:40:17.500486       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:40:17.500544       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:40:17.506556       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:40:17.506992       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:40:17.507040       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:17.508322       1 config.go:200] "Starting service config controller"
	I1027 22:40:17.508405       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:40:17.508415       1 config.go:309] "Starting node config controller"
	I1027 22:40:17.508449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:40:17.508458       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:40:17.508476       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:40:17.508487       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:40:17.508497       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:40:17.508506       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:40:17.609568       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 22:40:17.609608       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:40:17.609562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a73ac42016306256e53333754b058b687911ab56a58a53efba33e2650ed7f3c4] <==
	I1027 22:40:15.276232       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:40:16.055355       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:40:16.055402       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:40:16.055419       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:40:16.055428       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:40:16.132799       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:40:16.133457       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:40:16.136588       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:40:16.136676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:40:16.139040       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:40:16.139103       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:40:16.237205       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:40:19 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:19.482410     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a81bcd0c-04cb-409e-aad0-b5a2fa67a094-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-s2lwd\" (UID: \"a81bcd0c-04cb-409e-aad0-b5a2fa67a094\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s2lwd"
	Oct 27 22:40:19 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:19.482435     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x49nf\" (UniqueName: \"kubernetes.io/projected/a81bcd0c-04cb-409e-aad0-b5a2fa67a094-kube-api-access-x49nf\") pod \"kubernetes-dashboard-855c9754f9-s2lwd\" (UID: \"a81bcd0c-04cb-409e-aad0-b5a2fa67a094\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s2lwd"
	Oct 27 22:40:19 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:19.482453     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/37064add-e8da-40e3-9610-90576ff56b3b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-6x67p\" (UID: \"37064add-e8da-40e3-9610-90576ff56b3b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p"
	Oct 27 22:40:22 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:22.961797     724 scope.go:117] "RemoveContainer" containerID="158e3a0428f441cf6d1f1cf2bd69b5b147d55f5f9a74339253a024356b7d9556"
	Oct 27 22:40:23 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:23.966319     724 scope.go:117] "RemoveContainer" containerID="158e3a0428f441cf6d1f1cf2bd69b5b147d55f5f9a74339253a024356b7d9556"
	Oct 27 22:40:23 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:23.966491     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:23 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:23.966690     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:40:24 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:24.970798     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:24 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:24.971045     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:40:26 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:26.345531     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 22:40:26 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:26.987459     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s2lwd" podStartSLOduration=1.550663664 podStartE2EDuration="7.987436843s" podCreationTimestamp="2025-10-27 22:40:19 +0000 UTC" firstStartedPulling="2025-10-27 22:40:19.659453813 +0000 UTC m=+5.832836512" lastFinishedPulling="2025-10-27 22:40:26.096227003 +0000 UTC m=+12.269609691" observedRunningTime="2025-10-27 22:40:26.987434735 +0000 UTC m=+13.160817441" watchObservedRunningTime="2025-10-27 22:40:26.987436843 +0000 UTC m=+13.160819549"
	Oct 27 22:40:30 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:30.222432     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:30 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:30.223081     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:40:43 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:43.913131     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:44 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:44.031077     724 scope.go:117] "RemoveContainer" containerID="6b6ee23faf4df37eb0afd2343b4da8036ff6f3f5045b02c06256a756affa42f0"
	Oct 27 22:40:44 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:44.031365     724 scope.go:117] "RemoveContainer" containerID="78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	Oct 27 22:40:44 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:44.031598     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:40:50 default-k8s-diff-port-927034 kubelet[724]: I1027 22:40:50.222399     724 scope.go:117] "RemoveContainer" containerID="78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	Oct 27 22:40:50 default-k8s-diff-port-927034 kubelet[724]: E1027 22:40:50.222652     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:41:00 default-k8s-diff-port-927034 kubelet[724]: I1027 22:41:00.912752     724 scope.go:117] "RemoveContainer" containerID="78283aebd6f86e910fff207b25626a03ab341412a3e65aa0b3d42a4319e2f18d"
	Oct 27 22:41:00 default-k8s-diff-port-927034 kubelet[724]: E1027 22:41:00.913860     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6x67p_kubernetes-dashboard(37064add-e8da-40e3-9610-90576ff56b3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6x67p" podUID="37064add-e8da-40e3-9610-90576ff56b3b"
	Oct 27 22:41:10 default-k8s-diff-port-927034 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 22:41:10 default-k8s-diff-port-927034 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 22:41:10 default-k8s-diff-port-927034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 22:41:10 default-k8s-diff-port-927034 systemd[1]: kubelet.service: Consumed 1.812s CPU time.
	
	
	==> kubernetes-dashboard [943e0d285e380306579142f00ea866adbc1a6d3e36fe8de0c8f3a0cfa6d58fda] <==
	2025/10/27 22:40:26 Starting overwatch
	2025/10/27 22:40:26 Using namespace: kubernetes-dashboard
	2025/10/27 22:40:26 Using in-cluster config to connect to apiserver
	2025/10/27 22:40:26 Using secret token for csrf signing
	2025/10/27 22:40:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 22:40:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 22:40:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 22:40:26 Generating JWE encryption key
	2025/10/27 22:40:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 22:40:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 22:40:26 Initializing JWE encryption key from synchronized object
	2025/10/27 22:40:26 Creating in-cluster Sidecar client
	2025/10/27 22:40:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 22:40:26 Serving insecurely on HTTP port: 9090
	2025/10/27 22:40:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [827e84f1fab22b15e97cd49ea5930dc974a7849de6da28521576edd02930da17] <==
	W1027 22:40:51.572225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:53.575707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:53.579687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:55.583638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:55.643160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:57.646226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:57.651416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:59.655080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:40:59.659878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:01.663789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:01.668692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:03.673129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:03.678579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:05.682279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:05.688966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:07.692572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:07.697571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:09.700830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:09.705811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:11.716794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:11.732196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:13.737781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:13.745354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:15.749920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:41:15.759833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dddf4daea9020cf289743053ebca403400a4f7513ff226a3edfb5fc2caf01a72] <==
	I1027 22:40:17.289626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 22:40:17.291920       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034: exit status 2 (358.335666ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-927034 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.01s)
E1027 22:42:21.462266  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/old-k8s-version-908589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:42:24.023700  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/old-k8s-version-908589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:42:29.145434  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/old-k8s-version-908589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
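The trailing "Loading client cert failed" errors appear to come from a cached transport still watching the already-deleted old-k8s-version-908589 profile; they are noise rather than part of the Pause failure itself. That failure reduces to: after `minikube pause`, the status probe still reports the apiserver as Running and exits with status 2, which helpers_test.go tolerates as "may be ok". A minimal sketch of that probe for local reproduction, in Go with only the standard library; the profile name is the one from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Profile name taken from the failing test above; substitute your own.
	profile := "default-k8s-diff-port-927034"
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	// A fully paused profile would be expected to print "Paused" here;
	// this run printed "Running" and the command exited with status 2.
	fmt.Printf("apiserver: %s (err: %v)\n", strings.TrimSpace(string(out)), err)
}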

Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 20.97
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 13.61
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.25
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.16
20 TestDownloadOnlyKic 0.85
21 TestBinaryMirror 2.42
22 TestOffline 57.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.3
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.3
27 TestAddons/Setup 126.77
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.45
48 TestAddons/StoppedEnableDisable 16.81
49 TestCertOptions 26.11
50 TestCertExpiration 216.94
52 TestForceSystemdFlag 36.3
53 TestForceSystemdEnv 40.19
58 TestErrorSpam/setup 23.62
59 TestErrorSpam/start 0.71
60 TestErrorSpam/status 0.98
61 TestErrorSpam/pause 6.34
62 TestErrorSpam/unpause 5.72
63 TestErrorSpam/stop 18.08
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 42.08
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.43
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.62
75 TestFunctional/serial/CacheCmd/cache/add_local 1.96
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 40.92
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.19
86 TestFunctional/serial/LogsFileCmd 1.26
87 TestFunctional/serial/InvalidService 4.01
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 10.13
91 TestFunctional/parallel/DryRun 0.41
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 30.33
101 TestFunctional/parallel/SSHCmd 0.55
102 TestFunctional/parallel/CpCmd 1.76
103 TestFunctional/parallel/MySQL 17.02
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.67
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
113 TestFunctional/parallel/License 0.4
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
116 TestFunctional/parallel/ProfileCmd/profile_list 0.45
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
118 TestFunctional/parallel/MountCmd/any-port 7.88
119 TestFunctional/parallel/MountCmd/specific-port 1.95
120 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
121 TestFunctional/parallel/Version/short 0.07
122 TestFunctional/parallel/Version/components 0.59
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.32
128 TestFunctional/parallel/ImageCommands/ImageListShort 1.92
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
132 TestFunctional/parallel/ImageCommands/ImageBuild 4.23
133 TestFunctional/parallel/ImageCommands/Setup 1.98
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
150 TestFunctional/parallel/ServiceCmd/List 1.7
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 121.86
163 TestMultiControlPlane/serial/DeployApp 5.17
164 TestMultiControlPlane/serial/PingHostFromPods 1.06
165 TestMultiControlPlane/serial/AddWorkerNode 27.07
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
168 TestMultiControlPlane/serial/CopyFile 17.46
169 TestMultiControlPlane/serial/StopSecondaryNode 19.34
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.58
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 125.64
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.56
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 46.61
177 TestMultiControlPlane/serial/RestartCluster 53.75
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
179 TestMultiControlPlane/serial/AddSecondaryNode 37.35
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 39.63
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.16
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 39.87
211 TestKicCustomNetwork/use_default_bridge_network 22.21
212 TestKicExistingNetwork 24.37
213 TestKicCustomSubnet 26.36
214 TestKicStaticIP 27.37
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 46.39
219 TestMountStart/serial/StartWithMountFirst 5.76
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 5.93
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.68
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.76
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 64.6
231 TestMultiNode/serial/DeployApp2Nodes 4.27
232 TestMultiNode/serial/PingHostFrom2Pods 0.74
233 TestMultiNode/serial/AddNode 26.31
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 9.94
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 7.49
239 TestMultiNode/serial/RestartKeepsNodes 80.19
240 TestMultiNode/serial/DeleteNode 5.25
241 TestMultiNode/serial/StopMultiNode 30.28
242 TestMultiNode/serial/RestartMultiNode 25.5
243 TestMultiNode/serial/ValidateNameConflict 23.58
248 TestPreload 128.3
250 TestScheduledStopUnix 97.16
253 TestInsufficientStorage 9.72
254 TestRunningBinaryUpgrade 50.03
256 TestKubernetesUpgrade 304.49
257 TestMissingContainerUpgrade 84.34
259 TestPause/serial/Start 53.79
260 TestStoppedBinaryUpgrade/Setup 3.06
261 TestStoppedBinaryUpgrade/Upgrade 62.89
262 TestPause/serial/SecondStartNoReconfiguration 6.64
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/StartWithK8s 27.54
282 TestNetworkPlugins/group/false 3.84
287 TestStartStop/group/old-k8s-version/serial/FirstStart 58.5
288 TestNoKubernetes/serial/StartWithStopK8s 17.1
289 TestNoKubernetes/serial/Start 8.46
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
291 TestNoKubernetes/serial/ProfileList 1.76
292 TestNoKubernetes/serial/Stop 1.27
293 TestNoKubernetes/serial/StartNoArgs 6.63
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
296 TestStartStop/group/no-preload/serial/FirstStart 50.99
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.27
299 TestStartStop/group/old-k8s-version/serial/Stop 16.32
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
301 TestStartStop/group/old-k8s-version/serial/SecondStart 45.56
302 TestStartStop/group/no-preload/serial/DeployApp 9.22
304 TestStartStop/group/no-preload/serial/Stop 18.52
306 TestStartStop/group/embed-certs/serial/FirstStart 43.54
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/no-preload/serial/SecondStart 46.33
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.09
315 TestStartStop/group/embed-certs/serial/DeployApp 9.25
317 TestStartStop/group/embed-certs/serial/Stop 16.15
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
323 TestStartStop/group/embed-certs/serial/SecondStart 51.23
325 TestStartStop/group/newest-cni/serial/FirstStart 29.69
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
327 TestNetworkPlugins/group/auto/Start 42.3
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.29
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 18.56
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.11
335 TestNetworkPlugins/group/auto/KubeletFlags 0.38
336 TestNetworkPlugins/group/auto/NetCatPod 9.23
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
339 TestStartStop/group/newest-cni/serial/SecondStart 10.99
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
341 TestNetworkPlugins/group/auto/DNS 0.12
342 TestNetworkPlugins/group/auto/Localhost 0.1
343 TestNetworkPlugins/group/auto/HairPin 0.09
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
350 TestNetworkPlugins/group/kindnet/Start 41.98
351 TestNetworkPlugins/group/calico/Start 50.91
352 TestNetworkPlugins/group/custom-flannel/Start 56.32
353 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
357 TestNetworkPlugins/group/enable-default-cni/Start 42.11
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
360 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/DNS 0.11
363 TestNetworkPlugins/group/kindnet/Localhost 0.09
364 TestNetworkPlugins/group/kindnet/HairPin 0.09
365 TestNetworkPlugins/group/calico/KubeletFlags 0.32
366 TestNetworkPlugins/group/calico/NetCatPod 8.22
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.2
369 TestNetworkPlugins/group/calico/DNS 0.14
370 TestNetworkPlugins/group/calico/Localhost 0.1
371 TestNetworkPlugins/group/calico/HairPin 0.1
372 TestNetworkPlugins/group/custom-flannel/DNS 0.16
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
375 TestNetworkPlugins/group/flannel/Start 46.88
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
378 TestNetworkPlugins/group/bridge/Start 37.66
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
384 TestNetworkPlugins/group/bridge/NetCatPod 9.18
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
386 TestNetworkPlugins/group/flannel/NetCatPod 8.16
387 TestNetworkPlugins/group/bridge/DNS 0.11
388 TestNetworkPlugins/group/bridge/Localhost 0.08
389 TestNetworkPlugins/group/bridge/HairPin 0.08
390 TestNetworkPlugins/group/flannel/DNS 0.11
391 TestNetworkPlugins/group/flannel/Localhost 0.09
392 TestNetworkPlugins/group/flannel/HairPin 0.09
x
+
TestDownloadOnly/v1.28.0/json-events (20.97s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-503153 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-503153 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (20.971290313s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (20.97s)
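With -o=json, `minikube start` writes one JSON object per line (CloudEvents-style events) instead of its usual human-readable output, and that stream is what this json-events test consumes. A minimal consumer sketch in Go, standard library only; the profile name download-only-demo is illustrative, and decoding into a generic map deliberately avoids committing to exact field names:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirrors the invocation above, with an illustrative profile name.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-demo", "--force",
		"--kubernetes-version=v1.28.0", "--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]interface{}
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println(ev["type"], ev["data"]) // "type"/"data" follow the CloudEvents shape
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}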

x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1027 21:53:25.677334  485668 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1027 21:53:25.677453  485668 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
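The preload-exists step is a pure filesystem assertion: the tarball downloaded by the previous test must be present in the cache at the path logged above. A standalone sketch of the same check in Go, assuming MINIKUBE_HOME points at the .minikube directory as it does in this run's environment:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Cache layout taken from the preload.go log lines above.
	tarball := filepath.Join(os.Getenv("MINIKUBE_HOME"),
		"cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("preload exists:", tarball)
}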

x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-503153
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-503153: exit status 85 (77.498242ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-503153 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-503153 │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 21:53:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 21:53:04.761744  485680 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:53:04.762263  485680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:53:04.762282  485680 out.go:374] Setting ErrFile to fd 2...
	I1027 21:53:04.762288  485680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:53:04.762753  485680 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	W1027 21:53:04.763071  485680 root.go:316] Error reading config file at /home/jenkins/minikube-integration/21790-482142/.minikube/config/config.json: open /home/jenkins/minikube-integration/21790-482142/.minikube/config/config.json: no such file or directory
	I1027 21:53:04.763860  485680 out.go:368] Setting JSON to true
	I1027 21:53:04.764760  485680 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5724,"bootTime":1761596261,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 21:53:04.764857  485680 start.go:143] virtualization: kvm guest
	I1027 21:53:04.766672  485680 out.go:99] [download-only-503153] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1027 21:53:04.766812  485680 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball: no such file or directory
	I1027 21:53:04.766873  485680 notify.go:221] Checking for updates...
	I1027 21:53:04.767779  485680 out.go:171] MINIKUBE_LOCATION=21790
	I1027 21:53:04.768794  485680 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 21:53:04.769778  485680 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 21:53:04.770737  485680 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 21:53:04.771736  485680 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1027 21:53:04.773415  485680 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 21:53:04.773665  485680 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 21:53:04.797806  485680 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 21:53:04.797939  485680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 21:53:04.856731  485680 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-27 21:53:04.847091626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 21:53:04.856855  485680 docker.go:318] overlay module found
	I1027 21:53:04.858050  485680 out.go:99] Using the docker driver based on user configuration
	I1027 21:53:04.858077  485680 start.go:307] selected driver: docker
	I1027 21:53:04.858084  485680 start.go:928] validating driver "docker" against <nil>
	I1027 21:53:04.858190  485680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 21:53:04.917835  485680 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-27 21:53:04.907464638 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 21:53:04.918040  485680 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 21:53:04.918631  485680 start_flags.go:409] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1027 21:53:04.918781  485680 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 21:53:04.920498  485680 out.go:171] Using Docker driver with root privileges
	I1027 21:53:04.921386  485680 cni.go:84] Creating CNI manager for ""
	I1027 21:53:04.921453  485680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 21:53:04.921465  485680 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 21:53:04.921544  485680 start.go:351] cluster config:
	{Name:download-only-503153 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-503153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:53:04.922606  485680 out.go:99] Starting "download-only-503153" primary control-plane node in "download-only-503153" cluster
	I1027 21:53:04.922629  485680 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 21:53:04.923528  485680 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 21:53:04.923553  485680 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 21:53:04.923671  485680 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 21:53:04.941062  485680 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 21:53:04.941287  485680 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 21:53:04.941385  485680 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 21:53:05.030572  485680 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1027 21:53:05.030624  485680 cache.go:59] Caching tarball of preloaded images
	I1027 21:53:05.030811  485680 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 21:53:05.032500  485680 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1027 21:53:05.032525  485680 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1027 21:53:05.146264  485680 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1027 21:53:05.146418  485680 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1027 21:53:10.833022  485680 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	
	
	* The control-plane node download-only-503153 host does not exist
	  To start a cluster, run: "minikube start -p download-only-503153"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-503153
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.34.1/json-events (13.61s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-844553 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-844553 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.6055818s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (13.61s)

TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1027 21:53:39.758828  485668 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1027 21:53:39.758876  485668 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-844553
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-844553: exit status 85 (78.227388ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-503153 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-503153 │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ delete  │ -p download-only-503153                                                                                                                                                   │ download-only-503153 │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │ 27 Oct 25 21:53 UTC │
	│ start   │ -o=json --download-only -p download-only-844553 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-844553 │ jenkins │ v1.37.0 │ 27 Oct 25 21:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 21:53:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 21:53:26.208981  486075 out.go:360] Setting OutFile to fd 1 ...
	I1027 21:53:26.209284  486075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:53:26.209295  486075 out.go:374] Setting ErrFile to fd 2...
	I1027 21:53:26.209300  486075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 21:53:26.209542  486075 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 21:53:26.210085  486075 out.go:368] Setting JSON to true
	I1027 21:53:26.211022  486075 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5745,"bootTime":1761596261,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 21:53:26.211130  486075 start.go:143] virtualization: kvm guest
	I1027 21:53:26.212683  486075 out.go:99] [download-only-844553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 21:53:26.212868  486075 notify.go:221] Checking for updates...
	I1027 21:53:26.213865  486075 out.go:171] MINIKUBE_LOCATION=21790
	I1027 21:53:26.214927  486075 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 21:53:26.215984  486075 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 21:53:26.216891  486075 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 21:53:26.217769  486075 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1027 21:53:26.219302  486075 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 21:53:26.219568  486075 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 21:53:26.243617  486075 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 21:53:26.243720  486075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 21:53:26.306071  486075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2025-10-27 21:53:26.29505977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 21:53:26.306194  486075 docker.go:318] overlay module found
	I1027 21:53:26.307643  486075 out.go:99] Using the docker driver based on user configuration
	I1027 21:53:26.307673  486075 start.go:307] selected driver: docker
	I1027 21:53:26.307683  486075 start.go:928] validating driver "docker" against <nil>
	I1027 21:53:26.307831  486075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 21:53:26.371344  486075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2025-10-27 21:53:26.360360349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 21:53:26.371532  486075 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 21:53:26.372046  486075 start_flags.go:409] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1027 21:53:26.372207  486075 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 21:53:26.373546  486075 out.go:171] Using Docker driver with root privileges
	I1027 21:53:26.374434  486075 cni.go:84] Creating CNI manager for ""
	I1027 21:53:26.374531  486075 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 21:53:26.374548  486075 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 21:53:26.374656  486075 start.go:351] cluster config:
	{Name:download-only-844553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-844553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 21:53:26.375702  486075 out.go:99] Starting "download-only-844553" primary control-plane node in "download-only-844553" cluster
	I1027 21:53:26.375719  486075 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 21:53:26.376618  486075 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 21:53:26.376652  486075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:53:26.376757  486075 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 21:53:26.393756  486075 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 21:53:26.393885  486075 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 21:53:26.393903  486075 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 21:53:26.393908  486075 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 21:53:26.393916  486075 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 21:53:26.487018  486075 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 21:53:26.487056  486075 cache.go:59] Caching tarball of preloaded images
	I1027 21:53:26.487287  486075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 21:53:26.488830  486075 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1027 21:53:26.488857  486075 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1027 21:53:26.604521  486075 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1027 21:53:26.604576  486075 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21790-482142/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-844553 host does not exist
	  To start a cluster, run: "minikube start -p download-only-844553"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.25s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.25s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-844553
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnlyKic (0.85s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-726727 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-726727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-726727
--- PASS: TestDownloadOnlyKic (0.85s)

TestBinaryMirror (2.42s)
=== RUN   TestBinaryMirror
I1027 21:53:41.420914  485668 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-240698 --alsologtostderr --binary-mirror http://127.0.0.1:35931 --driver=docker  --container-runtime=crio
aaa_download_only_test.go:309: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-240698 --alsologtostderr --binary-mirror http://127.0.0.1:35931 --driver=docker  --container-runtime=crio: (1.615034318s)
helpers_test.go:175: Cleaning up "binary-mirror-240698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-240698
--- PASS: TestBinaryMirror (2.42s)
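
TestBinaryMirror points --binary-mirror at a local HTTP server standing in for dl.k8s.io. A rough sketch of hosting such a mirror yourself, assuming it must expose the same <version>/bin/<os>/<arch> layout the test binary requests (the "binary-mirror-demo" profile name is illustrative, not from this run):

    $ mkdir -p mirror/v1.34.1/bin/linux/amd64
    $ curl -fLo mirror/v1.34.1/bin/linux/amd64/kubectl https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
    $ (cd mirror && python3 -m http.server 35931) &
    $ out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:35931 --driver=docker --container-runtime=crio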

TestOffline (57.13s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-037558 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-037558 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (54.579934793s)
helpers_test.go:175: Cleaning up "offline-crio-037558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-037558
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-037558: (2.545290149s)
--- PASS: TestOffline (57.13s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.3s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-681393
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-681393: exit status 85 (303.756413ms)
-- stdout --
	* Profile "addons-681393" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-681393"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.30s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.3s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-681393
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-681393: exit status 85 (302.708611ms)
-- stdout --
	* Profile "addons-681393" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-681393"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.30s)

TestAddons/Setup (126.77s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-681393 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-681393 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.772183016s)
--- PASS: TestAddons/Setup (126.77s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-681393 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-681393 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.45s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-681393 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-681393 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d42e239b-3156-4365-aa06-9d3e832e54db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d42e239b-3156-4365-aa06-9d3e832e54db] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004362399s
addons_test.go:694: (dbg) Run:  kubectl --context addons-681393 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-681393 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-681393 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.45s)
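
The FakeCredentials assertions reduce to the gcp-auth webhook injecting Google credential variables into freshly created pods. The same probes can be run by hand once the busybox pod is Running (the values printed depend on the fake credentials the addon installs):

    $ kubectl --context addons-681393 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    $ kubectl --context addons-681393 exec busybox -- printenv GOOGLE_CLOUD_PROJECT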

TestAddons/StoppedEnableDisable (16.81s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-681393
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-681393: (16.533061555s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-681393
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-681393
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-681393
--- PASS: TestAddons/StoppedEnableDisable (16.81s)

TestCertOptions (26.11s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-175944 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-175944 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.753179717s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-175944 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-175944 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-175944 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-175944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-175944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-175944: (2.609402907s)
--- PASS: TestCertOptions (26.11s)
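
Given the flags above, the apiserver certificate should carry SANs for 127.0.0.1, 192.168.15.15, localhost and www.google.com, with the apiserver on port 8555. A quick manual spot check along the same lines as the test's openssl call (assumes OpenSSL 1.1.1+ in the guest so -ext is available):

    $ out/minikube-linux-amd64 -p cert-options-175944 ssh "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"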

TestCertExpiration (216.94s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-219241 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-219241 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.623259594s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-219241 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-219241 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.290455952s)
helpers_test.go:175: Cleaning up "cert-expiration-219241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-219241
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-219241: (3.028952653s)
--- PASS: TestCertExpiration (216.94s)
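
TestCertExpiration first issues cluster certificates valid for only 3 minutes, then restarts with --cert-expiration=8760h so they are re-issued for a year. One way to eyeball the effective expiry while the profile still exists (a sketch, not part of the test itself):

    $ out/minikube-linux-amd64 -p cert-expiration-219241 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"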

TestForceSystemdFlag (36.3s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-209757 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-209757 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.763091226s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-209757 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-209757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-209757
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-209757: (5.220053724s)
--- PASS: TestForceSystemdFlag (36.30s)
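
The --force-systemd assertion is just a config read: with the flag set, CRI-O's drop-in should select the systemd cgroup manager. A manual check along the same lines (the expected value shown is an assumption based on CRI-O's cgroup_manager option, not output captured from this run):

    $ out/minikube-linux-amd64 -p force-systemd-flag-209757 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    cgroup_manager = "systemd"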

TestForceSystemdEnv (40.19s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-078908 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-078908 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.411325463s)
helpers_test.go:175: Cleaning up "force-systemd-env-078908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-078908
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-078908: (2.774998478s)
--- PASS: TestForceSystemdEnv (40.19s)

TestErrorSpam/setup (23.62s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-787153 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-787153 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-787153 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-787153 --driver=docker  --container-runtime=crio: (23.616295021s)
--- PASS: TestErrorSpam/setup (23.62s)

TestErrorSpam/start (0.71s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

TestErrorSpam/status (0.98s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 status
--- PASS: TestErrorSpam/status (0.98s)

TestErrorSpam/pause (6.34s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause: exit status 80 (1.989732905s)
-- stdout --
	* Pausing node nospam-787153 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:59:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause: exit status 80 (2.390613393s)
-- stdout --
	* Pausing node nospam-787153 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:59:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause: exit status 80 (1.957128808s)
-- stdout --
	* Pausing node nospam-787153 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:59:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.34s)
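
All three pause attempts fail identically: minikube shells into the node and runs "sudo runc list -f json", which aborts because /run/runc is missing. With the docker driver the failing step can be reproduced directly against the kic container (node name taken from the log above):

    $ docker exec nospam-787153 sudo runc list -f json
    # expected to fail with: open /run/runc: no such file or directory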

TestErrorSpam/unpause (5.72s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause: exit status 80 (1.828067401s)
-- stdout --
	* Unpausing node nospam-787153 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause: exit status 80 (2.177836539s)
-- stdout --
	* Unpausing node nospam-787153 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:59:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause: exit status 80 (1.713269862s)
-- stdout --
	* Unpausing node nospam-787153 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T21:59:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.72s)

TestErrorSpam/stop (18.08s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 stop: (17.874849005s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-787153 --log_dir /tmp/nospam-787153 stop
--- PASS: TestErrorSpam/stop (18.08s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21790-482142/.minikube/files/etc/test/nested/copy/485668/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (42.08s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-287960 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-287960 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.081497372s)
--- PASS: TestFunctional/serial/StartWithProxy (42.08s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.43s)
=== RUN   TestFunctional/serial/SoftStart
I1027 22:00:44.683909  485668 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-287960 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-287960 --alsologtostderr -v=8: (6.429396206s)
functional_test.go:678: soft start took 6.430183478s for "functional-287960" cluster.
I1027 22:00:51.116096  485668 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.43s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-287960 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cache add registry.k8s.io/pause:3.1
E1027 22:00:51.499547  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:00:51.505995  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:00:51.517477  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:00:51.538953  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:00:51.580492  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:00:51.662043  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:00:51.823652  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cache add registry.k8s.io/pause:3.3
E1027 22:00:52.145010  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:00:52.787189  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.62s)

TestFunctional/serial/CacheCmd/cache/add_local (1.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-287960 /tmp/TestFunctionalserialCacheCmdcacheadd_local2575726513/001
E1027 22:00:54.068805  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cache add minikube-local-cache-test:functional-287960
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-287960 cache add minikube-local-cache-test:functional-287960: (1.638942405s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cache delete minikube-local-cache-test:functional-287960
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-287960
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1027 22:00:56.630634  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.073679ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
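The reload sequence above can be reproduced by hand against the same profile; a minimal sketch using only the commands already shown in this log:

	# drop the cached image from the node's container runtime
	out/minikube-linux-amd64 -p functional-287960 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# inspecti now exits non-zero ("no such image ... present")
	out/minikube-linux-amd64 -p functional-287960 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# push everything in minikube's local cache back into the node
	out/minikube-linux-amd64 -p functional-287960 cache reload
	# the image resolves again
	out/minikube-linux-amd64 -p functional-287960 ssh sudo crictl inspecti registry.k8s.io/pause:latest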
TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 kubectl -- --context functional-287960 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-287960 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (40.92s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-287960 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1027 22:01:01.752494  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:01:11.993770  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:01:32.475120  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-287960 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.920544822s)
functional_test.go:776: restart took 40.920673515s for "functional-287960" cluster.
I1027 22:01:39.080433  485668 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (40.92s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-287960 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.19s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-287960 logs: (1.188499682s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

TestFunctional/serial/LogsFileCmd (1.26s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 logs --file /tmp/TestFunctionalserialLogsFileCmd779939347/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-287960 logs --file /tmp/TestFunctionalserialLogsFileCmd779939347/001/logs.txt: (1.263387765s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

TestFunctional/serial/InvalidService (4.01s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-287960 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-287960
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-287960: exit status 115 (345.444014ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32468 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-287960 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)

TestFunctional/parallel/ConfigCmd (0.44s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 config get cpus: exit status 14 (85.297679ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 config get cpus: exit status 14 (74.767946ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
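The exit-status-14 results above come from `config get` failing on an unset key; a minimal sketch of the round trip this test performs:

	out/minikube-linux-amd64 -p functional-287960 config unset cpus
	out/minikube-linux-amd64 -p functional-287960 config get cpus     # exits 14: key not found
	out/minikube-linux-amd64 -p functional-287960 config set cpus 2
	out/minikube-linux-amd64 -p functional-287960 config get cpus     # prints 2, exits 0
	out/minikube-linux-amd64 -p functional-287960 config unset cpus   # get exits 14 again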
TestFunctional/parallel/DashboardCmd (10.13s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-287960 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-287960 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 520490: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.13s)

TestFunctional/parallel/DryRun (0.41s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-287960 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-287960 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (178.27091ms)
-- stdout --
	* [functional-287960] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1027 22:01:48.608937  519870 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:01:48.609136  519870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:01:48.609148  519870 out.go:374] Setting ErrFile to fd 2...
	I1027 22:01:48.609156  519870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:01:48.609465  519870 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:01:48.610170  519870 out.go:368] Setting JSON to false
	I1027 22:01:48.611498  519870 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6248,"bootTime":1761596261,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:01:48.611591  519870 start.go:143] virtualization: kvm guest
	I1027 22:01:48.613311  519870 out.go:179] * [functional-287960] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:01:48.615244  519870 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:01:48.615231  519870 notify.go:221] Checking for updates...
	I1027 22:01:48.617063  519870 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:01:48.618098  519870 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:01:48.619225  519870 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:01:48.620327  519870 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:01:48.621598  519870 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:01:48.623119  519870 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:01:48.623700  519870 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:01:48.648452  519870 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:01:48.648581  519870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:01:48.706923  519870 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-27 22:01:48.696567634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:01:48.707043  519870 docker.go:318] overlay module found
	I1027 22:01:48.708718  519870 out.go:179] * Using the docker driver based on existing profile
	I1027 22:01:48.709632  519870 start.go:307] selected driver: docker
	I1027 22:01:48.709645  519870 start.go:928] validating driver "docker" against &{Name:functional-287960 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-287960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:01:48.709737  519870 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:01:48.711183  519870 out.go:203] 
	W1027 22:01:48.712106  519870 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 22:01:48.713066  519870 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-287960 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-287960 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-287960 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.815085ms)
-- stdout --
	* [functional-287960] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1027 22:01:48.431652  519769 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:01:48.431908  519769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:01:48.431917  519769 out.go:374] Setting ErrFile to fd 2...
	I1027 22:01:48.431922  519769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:01:48.432245  519769 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:01:48.432690  519769 out.go:368] Setting JSON to false
	I1027 22:01:48.433661  519769 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6247,"bootTime":1761596261,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:01:48.433776  519769 start.go:143] virtualization: kvm guest
	I1027 22:01:48.435790  519769 out.go:179] * [functional-287960] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1027 22:01:48.436851  519769 notify.go:221] Checking for updates...
	I1027 22:01:48.436877  519769 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:01:48.437937  519769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:01:48.439150  519769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:01:48.440348  519769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:01:48.441461  519769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:01:48.442449  519769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:01:48.444016  519769 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:01:48.444763  519769 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:01:48.468086  519769 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:01:48.468214  519769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:01:48.528909  519769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-27 22:01:48.518384156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:01:48.529055  519769 docker.go:318] overlay module found
	I1027 22:01:48.530701  519769 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1027 22:01:48.531676  519769 start.go:307] selected driver: docker
	I1027 22:01:48.531690  519769 start.go:928] validating driver "docker" against &{Name:functional-287960 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-287960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:01:48.531778  519769 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:01:48.533214  519769 out.go:203] 
	W1027 22:01:48.534175  519769 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1027 22:01:48.535055  519769 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (30.33s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c6492442-73a6-43c9-9bc1-43ea72a073c6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004097836s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-287960 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-287960 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-287960 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-287960 apply -f testdata/storage-provisioner/pod.yaml
I1027 22:01:52.001490  485668 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [90ca34a2-8686-480f-b66e-585a7e5de9d0] Pending
helpers_test.go:352: "sp-pod" [90ca34a2-8686-480f-b66e-585a7e5de9d0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [90ca34a2-8686-480f-b66e-585a7e5de9d0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003763643s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-287960 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-287960 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-287960 apply -f testdata/storage-provisioner/pod.yaml
I1027 22:02:08.834558  485668 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2a1b584e-1086-4b2c-a3da-03a0b99f92ad] Pending
helpers_test.go:352: "sp-pod" [2a1b584e-1086-4b2c-a3da-03a0b99f92ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2a1b584e-1086-4b2c-a3da-03a0b99f92ad] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004708335s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-287960 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.33s)
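The test above proves persistence by writing through one pod and reading from its replacement; a minimal sketch of that flow using the testdata manifests referenced in the log:

	kubectl --context functional-287960 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-287960 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-287960 exec sp-pod -- touch /tmp/mount/foo   # write through the claim
	kubectl --context functional-287960 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-287960 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-287960 exec sp-pod -- ls /tmp/mount          # foo survives the pod swap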
TestFunctional/parallel/SSHCmd (0.55s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "echo hello"
2025/10/27 22:01:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.76s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh -n functional-287960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cp functional-287960:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1331536719/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh -n functional-287960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh -n functional-287960 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.76s)

TestFunctional/parallel/MySQL (17.02s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-287960 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-qr9vk" [57a59e8f-b549-405c-8d66-ff116d79b4ac] Pending
helpers_test.go:352: "mysql-5bb876957f-qr9vk" [57a59e8f-b549-405c-8d66-ff116d79b4ac] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1027 22:02:13.437209  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "mysql-5bb876957f-qr9vk" [57a59e8f-b549-405c-8d66-ff116d79b4ac] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003966203s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-287960 exec mysql-5bb876957f-qr9vk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-287960 exec mysql-5bb876957f-qr9vk -- mysql -ppassword -e "show databases;": exit status 1 (86.066303ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1027 22:02:24.567293  485668 retry.go:31] will retry after 1.317935506s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-287960 exec mysql-5bb876957f-qr9vk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-287960 exec mysql-5bb876957f-qr9vk -- mysql -ppassword -e "show databases;": exit status 1 (88.973977ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1027 22:02:25.975015  485668 retry.go:31] will retry after 1.277981169s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-287960 exec mysql-5bb876957f-qr9vk -- mysql -ppassword -e "show databases;"
E1027 22:03:35.359513  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:05:51.498709  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:06:19.201132  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:10:51.499375  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (17.02s)
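The ERROR 2002 failures above are expected while mysqld is still opening its socket; the harness simply retries with backoff until the query succeeds. A hand-rolled equivalent (pod name taken from this run; a real script should add a timeout):

	until kubectl --context functional-287960 exec mysql-5bb876957f-qr9vk -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2   # socket /var/run/mysqld/mysqld.sock not ready yet
	done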
TestFunctional/parallel/FileSync (0.27s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/485668/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo cat /etc/test/nested/copy/485668/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.67s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/485668.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo cat /etc/ssl/certs/485668.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/485668.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo cat /usr/share/ca-certificates/485668.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4856682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo cat /etc/ssl/certs/4856682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4856682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo cat /usr/share/ca-certificates/4856682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)
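The `.0` filenames checked above appear to be c_rehash-style subject-hash links, so 51391683.0 should correspond to the subject hash of the synced test certificate; a sketch of how one could verify that correspondence (assuming openssl is available inside the node):

	out/minikube-linux-amd64 -p functional-287960 ssh \
	  "openssl x509 -noout -subject_hash -in /etc/ssl/certs/485668.pem"
	# expected to print 51391683, matching /etc/ssl/certs/51391683.0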
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-287960 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 ssh "sudo systemctl is-active docker": exit status 1 (332.377587ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 ssh "sudo systemctl is-active containerd": exit status 1 (401.01638ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
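`systemctl is-active` prints the unit state and exits non-zero for any state other than active (status 3 for inactive, surfaced above as exit status 1 by the minikube ssh wrapper), which is what this test keys on. A manual spot check; the crio line is an assumed counterpart, not part of the logged test:

	out/minikube-linux-amd64 -p functional-287960 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit
	out/minikube-linux-amd64 -p functional-287960 ssh "sudo systemctl is-active containerd"  # inactive, non-zero exit
	out/minikube-linux-amd64 -p functional-287960 ssh "sudo systemctl is-active crio"        # active runtime exits 0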
TestFunctional/parallel/License (0.4s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "370.117261ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "77.17206ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "361.139738ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.064116ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
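
The timings these ProfileCmd tests record are consistent: a full profile list takes roughly 360-370ms, while the -l/--light variants finish in 64-77ms, since light mode skips validating each cluster's live status. A rough way to reproduce the comparison (a hypothetical harness, not test code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timeCmd runs a command, discarding its output, and reports the wall time.
func timeCmd(name string, args ...string) time.Duration {
	start := time.Now()
	_ = exec.Command(name, args...).Run() // timing only; errors ignored here
	return time.Since(start)
}

func main() {
	full := timeCmd("out/minikube-linux-amd64", "profile", "list", "-o", "json")
	light := timeCmd("out/minikube-linux-amd64", "profile", "list", "-o", "json", "--light")
	fmt.Printf("full: %v, light: %v\n", full, light)
}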

TestFunctional/parallel/MountCmd/any-port (7.88s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdany-port4136602682/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761602507345023267" to /tmp/TestFunctionalparallelMountCmdany-port4136602682/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761602507345023267" to /tmp/TestFunctionalparallelMountCmdany-port4136602682/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761602507345023267" to /tmp/TestFunctionalparallelMountCmdany-port4136602682/001/test-1761602507345023267
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.60158ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1027 22:01:47.644915  485668 retry.go:31] will retry after 537.087266ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 27 22:01 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 27 22:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 27 22:01 test-1761602507345023267
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh cat /mount-9p/test-1761602507345023267
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-287960 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4716ca72-c637-4e56-90b2-21c77e91f627] Pending
helpers_test.go:352: "busybox-mount" [4716ca72-c637-4e56-90b2-21c77e91f627] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [4716ca72-c637-4e56-90b2-21c77e91f627] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4716ca72-c637-4e56-90b2-21c77e91f627] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004223119s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-287960 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdany-port4136602682/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.88s)
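
The failed first findmnt probe followed by a ~537ms retry is the expected startup pattern: the 9p server needs a moment before the guest mount becomes visible. A sketch of that poll-until-mounted loop (illustrative, mirroring the retry.go pattern in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the 9p mount is visible in the guest, as the test's retry
	// helper does above; give up after a handful of attempts.
	for i := 0; i < 5; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-287960",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		fmt.Printf("not mounted yet (%v), retrying...\n", err)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /mount-9p")
}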

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdspecific-port378784643/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.708809ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1027 22:01:55.523831  485668 retry.go:31] will retry after 458.362107ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdspecific-port378784643/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 ssh "sudo umount -f /mount-9p": exit status 1 (335.400155ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-287960 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdspecific-port378784643/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2646782202/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2646782202/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2646782202/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T" /mount1: exit status 1 (422.538357ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1027 22:01:57.594300  485668 retry.go:31] will retry after 250.69396ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-287960 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2646782202/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2646782202/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-287960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2646782202/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)
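
The cleanup verified here is "mount --kill=true", which tears down every mount process for the profile in one call; that is why the three subsequent stop attempts all report "unable to find parent, assuming dead". A sketch of the same kill-then-verify sequence (same binary and profile assumptions as the sketches above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One --kill=true call terminates all mount processes for the profile...
	_ = exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-287960", "--kill=true").Run()

	// ...after which findmnt should fail for each of the three mountpoints.
	for _, mp := range []string{"/mount1", "/mount2", "/mount3"} {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-287960",
			"ssh", "findmnt -T "+mp).Run()
		fmt.Printf("%s unmounted: %v\n", mp, err != nil)
	}
}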

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-287960 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-287960 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-287960 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-287960 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 522856: os: process already finished
helpers_test.go:525: unable to kill pid 522609: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-287960 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-287960 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6d44d901-20b4-498b-8f08-51c22004bff1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6d44d901-20b4-498b-8f08-51c22004bff1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004125173s
I1027 22:02:10.154086  485668 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.32s)
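
The wait helper polls pod state for up to 4m0s until everything matching run=nginx-svc reports Running, which is the Pending-to-Running transition visible above. A bare-bones equivalent using kubectl's jsonpath output (illustrative, not the helpers_test.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "functional-287960",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("nginx-svc pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx-svc")
}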

TestFunctional/parallel/ImageCommands/ImageListShort (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-287960 image ls --format short --alsologtostderr: (1.924381766s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-287960 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-287960 image ls --format short --alsologtostderr:
I1027 22:02:16.673081  525809 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:16.673199  525809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:16.673210  525809 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:16.673217  525809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:16.673483  525809 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
I1027 22:02:16.674130  525809 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:16.674235  525809 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:16.674760  525809 cli_runner.go:164] Run: docker container inspect functional-287960 --format={{.State.Status}}
I1027 22:02:16.694303  525809 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:16.694385  525809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-287960
I1027 22:02:16.714569  525809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/functional-287960/id_rsa Username:docker}
I1027 22:02:16.825429  525809 ssh_runner.go:195] Run: sudo crictl images --output json
I1027 22:02:18.521588  525809 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.696120239s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.92s)
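
The 1.92s duration is almost entirely node-side: per the stderr above, image ls resolves the ssh port via docker container inspect and then spends 1.696s in "sudo crictl images --output json". That underlying call can be issued directly (a sketch, under the same binary and profile assumptions):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The node-side source of truth that "minikube image ls" formats:
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-287960",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("crictl returned %d bytes of image JSON\n", len(out))
}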

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-287960 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-287960 image ls --format table --alsologtostderr:
I1027 22:02:18.829307  526033 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:18.829598  526033 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:18.829606  526033 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:18.829610  526033 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:18.829812  526033 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
I1027 22:02:18.830392  526033 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:18.830490  526033 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:18.830857  526033 cli_runner.go:164] Run: docker container inspect functional-287960 --format={{.State.Status}}
I1027 22:02:18.848445  526033 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:18.848513  526033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-287960
I1027 22:02:18.865775  526033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/functional-287960/id_rsa Username:docker}
I1027 22:02:18.964451  526033 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-287960 image ls --format json --alsologtostderr:
[{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"7dd6aaa1717ab7eaae4578503e4c4
d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/
mysql:5.7"],"size":"519571821"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["d
ocker.io/library/nginx:latest"],"size":"155467611"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha25
6:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd
04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provision
er:v5"],"size":"31470524"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-287960 image ls --format json --alsologtostderr:
I1027 22:02:18.593573  525945 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:18.593858  525945 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:18.593869  525945 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:18.593874  525945 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:18.594067  525945 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
I1027 22:02:18.594659  525945 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:18.594755  525945 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:18.595133  525945 cli_runner.go:164] Run: docker container inspect functional-287960 --format={{.State.Status}}
I1027 22:02:18.614211  525945 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:18.614281  525945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-287960
I1027 22:02:18.632459  525945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/functional-287960/id_rsa Username:docker}
I1027 22:02:18.733546  525945 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
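
Each entry in the JSON above carries the same fields the table and YAML views render: id, repoDigests, repoTags, and size (bytes, encoded as a string). A small decoder for that shape (the struct is mine; the field names come straight from the output above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the fields visible in the log's JSON output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-287960",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-12s %v\n", img.Size, img.RepoTags)
	}
}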

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-287960 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-287960 image ls --format yaml --alsologtostderr:
I1027 22:02:19.059615  526121 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:19.059887  526121 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:19.059898  526121 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:19.059902  526121 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:19.060166  526121 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
I1027 22:02:19.060801  526121 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:19.060907  526121 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:19.061284  526121 cli_runner.go:164] Run: docker container inspect functional-287960 --format={{.State.Status}}
I1027 22:02:19.080296  526121 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:19.080574  526121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-287960
I1027 22:02:19.098073  526121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/functional-287960/id_rsa Username:docker}
I1027 22:02:19.199287  526121 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-287960 ssh pgrep buildkitd: exit status 1 (280.367885ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image build -t localhost/my-image:functional-287960 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-287960 image build -t localhost/my-image:functional-287960 testdata/build --alsologtostderr: (3.716288759s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-287960 image build -t localhost/my-image:functional-287960 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4cbae89d9eb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-287960
--> 02d4cbf7ce7
Successfully tagged localhost/my-image:functional-287960
02d4cbf7ce71270de434cf3af2a13d383165b7bd20cd099956cb49da8cfdb182
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-287960 image build -t localhost/my-image:functional-287960 testdata/build --alsologtostderr:
I1027 22:02:19.574217  526339 out.go:360] Setting OutFile to fd 1 ...
I1027 22:02:19.574578  526339 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:19.574589  526339 out.go:374] Setting ErrFile to fd 2...
I1027 22:02:19.574594  526339 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:02:19.574814  526339 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
I1027 22:02:19.575410  526339 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:19.576084  526339 config.go:182] Loaded profile config "functional-287960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:02:19.576462  526339 cli_runner.go:164] Run: docker container inspect functional-287960 --format={{.State.Status}}
I1027 22:02:19.596701  526339 ssh_runner.go:195] Run: systemctl --version
I1027 22:02:19.596759  526339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-287960
I1027 22:02:19.615053  526339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/functional-287960/id_rsa Username:docker}
I1027 22:02:19.715264  526339 build_images.go:162] Building image from path: /tmp/build.1794006464.tar
I1027 22:02:19.715340  526339 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1027 22:02:19.724525  526339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1794006464.tar
I1027 22:02:19.728708  526339 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1794006464.tar: stat -c "%s %y" /var/lib/minikube/build/build.1794006464.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1794006464.tar': No such file or directory
I1027 22:02:19.728746  526339 ssh_runner.go:362] scp /tmp/build.1794006464.tar --> /var/lib/minikube/build/build.1794006464.tar (3072 bytes)
I1027 22:02:19.748283  526339 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1794006464
I1027 22:02:19.756550  526339 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1794006464 -xf /var/lib/minikube/build/build.1794006464.tar
I1027 22:02:19.764901  526339 crio.go:315] Building image: /var/lib/minikube/build/build.1794006464
I1027 22:02:19.764984  526339 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-287960 /var/lib/minikube/build/build.1794006464 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1027 22:02:23.209063  526339 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-287960 /var/lib/minikube/build/build.1794006464 --cgroup-manager=cgroupfs: (3.444049261s)
I1027 22:02:23.209135  526339 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1794006464
I1027 22:02:23.217292  526339 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1794006464.tar
I1027 22:02:23.224819  526339 build_images.go:218] Built localhost/my-image:functional-287960 from /tmp/build.1794006464.tar
I1027 22:02:23.224848  526339 build_images.go:134] succeeded building to: functional-287960
I1027 22:02:23.224854  526339 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.23s)
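
The stderr trace spells out the build pipeline on a crio node: the local testdata/build context is tarred, copied to /var/lib/minikube/build, unpacked, built with "sudo podman build" (3.44s of the 4.23s total), and the tar and directory are then removed. A condensed sketch of those node-side steps driven over ssh (paths simplified to /tmp; not the real build_images code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one script on the minikube node over ssh, echoing it first.
func run(script string) {
	fmt.Println("+", script)
	_ = exec.Command("out/minikube-linux-amd64", "-p", "functional-287960",
		"ssh", script).Run()
}

func main() {
	// Assumes the tarred build context is already at /tmp/build.tar on the
	// node (the real helper scp's it into /var/lib/minikube/build first).
	run("sudo mkdir -p /tmp/build-ctx")
	run("sudo tar -C /tmp/build-ctx -xf /tmp/build.tar")
	run("sudo podman build -t localhost/my-image:functional-287960 /tmp/build-ctx --cgroup-manager=cgroupfs")
	run("sudo rm -rf /tmp/build-ctx /tmp/build.tar")
}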

TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.96140488s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-287960
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image rm kicbase/echo-server:functional-287960 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-287960 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
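
The jsonpath query above reads the ingress IP that the running tunnel assigned to the LoadBalancer service; the AccessDirect step that follows simply issues an HTTP request against it (10.103.224.12 in this run). A sketch combining the two (a hypothetical wrapper, not the test code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Read the tunnel-assigned ingress IP straight from the service status.
	out, err := exec.Command("kubectl", "--context", "functional-287960",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))

	// With "minikube tunnel" running, that IP is reachable from the host.
	resp, err := http.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET http://%s -> %s (%d bytes)\n", ip, resp.Status, len(body))
}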

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.224.12 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-287960 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-287960 service list: (1.69851449s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-287960 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-287960 service list -o json: (1.693952158s)
functional_test.go:1504: Took "1.694044148s" to run "out/minikube-linux-amd64 -p functional-287960 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-287960
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-287960
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-287960
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (121.86s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m1.121590445s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (121.86s)

TestMultiControlPlane/serial/DeployApp (5.17s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 kubectl -- rollout status deployment/busybox: (3.144523543s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-6b447 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-lkfh2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-xb8zl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-6b447 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-lkfh2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-xb8zl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-6b447 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-lkfh2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-xb8zl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.17s)
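
The deploy check fans the same three lookups (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) across all three busybox replicas, proving DNS from every node of the HA cluster rather than just one. A sketch of that loop (pod names hardcoded from this run; in practice they come from the jsonpath query above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-6b447", "busybox-7b57f96db7-lkfh2", "busybox-7b57f96db7-xb8zl"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, name := range names {
		for _, pod := range pods {
			// Each replica must resolve each name for the check to pass.
			err := exec.Command("kubectl", "--context", "ha-278480",
				"exec", pod, "--", "nslookup", name).Run()
			fmt.Printf("%s: nslookup %s ok=%v\n", pod, name, err == nil)
		}
	}
}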

TestMultiControlPlane/serial/PingHostFromPods (1.06s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-6b447 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-6b447 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-lkfh2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-lkfh2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-xb8zl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 kubectl -- exec busybox-7b57f96db7-xb8zl -- sh -c "ping -c 1 192.168.49.1"
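Note: the pipeline above recovers the host's gateway IP from inside each pod before pinging it. A minimal standalone sketch, assuming the BusyBox nslookup output layout (resolved address on line 5, third space-separated field; <busybox-pod> is a placeholder for one of the pod names above):

	kubectl --context ha-278480 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"   # e.g. 192.168.49.1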
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.06s)

TestMultiControlPlane/serial/AddWorkerNode (27.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 node add --alsologtostderr -v 5: (26.161983932s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.07s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-278480 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (17.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp testdata/cp-test.txt ha-278480:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1489503563/001/cp-test_ha-278480.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480:/home/docker/cp-test.txt ha-278480-m02:/home/docker/cp-test_ha-278480_ha-278480-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test_ha-278480_ha-278480-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480:/home/docker/cp-test.txt ha-278480-m03:/home/docker/cp-test_ha-278480_ha-278480-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m03 "sudo cat /home/docker/cp-test_ha-278480_ha-278480-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480:/home/docker/cp-test.txt ha-278480-m04:/home/docker/cp-test_ha-278480_ha-278480-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m04 "sudo cat /home/docker/cp-test_ha-278480_ha-278480-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp testdata/cp-test.txt ha-278480-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1489503563/001/cp-test_ha-278480-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m02:/home/docker/cp-test.txt ha-278480:/home/docker/cp-test_ha-278480-m02_ha-278480.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480 "sudo cat /home/docker/cp-test_ha-278480-m02_ha-278480.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m02:/home/docker/cp-test.txt ha-278480-m03:/home/docker/cp-test_ha-278480-m02_ha-278480-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m03 "sudo cat /home/docker/cp-test_ha-278480-m02_ha-278480-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m02:/home/docker/cp-test.txt ha-278480-m04:/home/docker/cp-test_ha-278480-m02_ha-278480-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m04 "sudo cat /home/docker/cp-test_ha-278480-m02_ha-278480-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp testdata/cp-test.txt ha-278480-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1489503563/001/cp-test_ha-278480-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m03:/home/docker/cp-test.txt ha-278480:/home/docker/cp-test_ha-278480-m03_ha-278480.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480 "sudo cat /home/docker/cp-test_ha-278480-m03_ha-278480.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m03:/home/docker/cp-test.txt ha-278480-m02:/home/docker/cp-test_ha-278480-m03_ha-278480-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test_ha-278480-m03_ha-278480-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m03:/home/docker/cp-test.txt ha-278480-m04:/home/docker/cp-test_ha-278480-m03_ha-278480-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m04 "sudo cat /home/docker/cp-test_ha-278480-m03_ha-278480-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp testdata/cp-test.txt ha-278480-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1489503563/001/cp-test_ha-278480-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m04:/home/docker/cp-test.txt ha-278480:/home/docker/cp-test_ha-278480-m04_ha-278480.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480 "sudo cat /home/docker/cp-test_ha-278480-m04_ha-278480.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m04:/home/docker/cp-test.txt ha-278480-m02:/home/docker/cp-test_ha-278480-m04_ha-278480-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test_ha-278480-m04_ha-278480-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 cp ha-278480-m04:/home/docker/cp-test.txt ha-278480-m03:/home/docker/cp-test_ha-278480-m04_ha-278480-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m03 "sudo cat /home/docker/cp-test_ha-278480-m04_ha-278480-m03.txt"
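Note: every hop above pairs minikube cp with an ssh'd sudo cat so the file's round trip to each node is verified. One such pair, reproduced standalone from the log:

	out/minikube-linux-amd64 -p ha-278480 cp testdata/cp-test.txt ha-278480-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-278480 ssh -n ha-278480-m02 "sudo cat /home/docker/cp-test.txt"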
--- PASS: TestMultiControlPlane/serial/CopyFile (17.46s)

TestMultiControlPlane/serial/StopSecondaryNode (19.34s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 node stop m02 --alsologtostderr -v 5: (18.616477902s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5: exit status 7 (727.207705ms)

-- stdout --
	ha-278480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-278480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-278480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-278480-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1027 22:15:27.901071  550867 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:15:27.901408  550867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:15:27.901417  550867 out.go:374] Setting ErrFile to fd 2...
	I1027 22:15:27.901423  550867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:15:27.901705  550867 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:15:27.902000  550867 out.go:368] Setting JSON to false
	I1027 22:15:27.902028  550867 mustload.go:66] Loading cluster: ha-278480
	I1027 22:15:27.902276  550867 notify.go:221] Checking for updates...
	I1027 22:15:27.903990  550867 config.go:182] Loaded profile config "ha-278480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:15:27.908030  550867 status.go:174] checking status of ha-278480 ...
	I1027 22:15:27.909070  550867 cli_runner.go:164] Run: docker container inspect ha-278480 --format={{.State.Status}}
	I1027 22:15:27.936408  550867 status.go:371] ha-278480 host status = "Running" (err=<nil>)
	I1027 22:15:27.936454  550867 host.go:66] Checking if "ha-278480" exists ...
	I1027 22:15:27.936826  550867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278480
	I1027 22:15:27.955543  550867 host.go:66] Checking if "ha-278480" exists ...
	I1027 22:15:27.955901  550867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:15:27.955994  550867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278480
	I1027 22:15:27.973533  550867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/ha-278480/id_rsa Username:docker}
	I1027 22:15:28.073010  550867 ssh_runner.go:195] Run: systemctl --version
	I1027 22:15:28.079991  550867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:15:28.093091  550867 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:15:28.154672  550867 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 22:15:28.144871924 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:15:28.155364  550867 kubeconfig.go:125] found "ha-278480" server: "https://192.168.49.254:8443"
	I1027 22:15:28.155407  550867 api_server.go:166] Checking apiserver status ...
	I1027 22:15:28.155450  550867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:15:28.167135  550867 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	W1027 22:15:28.176896  550867 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:15:28.176984  550867 ssh_runner.go:195] Run: ls
	I1027 22:15:28.181007  550867 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 22:15:28.185656  550867 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 22:15:28.185682  550867 status.go:463] ha-278480 apiserver status = Running (err=<nil>)
	I1027 22:15:28.185696  550867 status.go:176] ha-278480 status: &{Name:ha-278480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:15:28.185716  550867 status.go:174] checking status of ha-278480-m02 ...
	I1027 22:15:28.185976  550867 cli_runner.go:164] Run: docker container inspect ha-278480-m02 --format={{.State.Status}}
	I1027 22:15:28.203381  550867 status.go:371] ha-278480-m02 host status = "Stopped" (err=<nil>)
	I1027 22:15:28.203406  550867 status.go:384] host is not running, skipping remaining checks
	I1027 22:15:28.203414  550867 status.go:176] ha-278480-m02 status: &{Name:ha-278480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:15:28.203465  550867 status.go:174] checking status of ha-278480-m03 ...
	I1027 22:15:28.203718  550867 cli_runner.go:164] Run: docker container inspect ha-278480-m03 --format={{.State.Status}}
	I1027 22:15:28.222223  550867 status.go:371] ha-278480-m03 host status = "Running" (err=<nil>)
	I1027 22:15:28.222254  550867 host.go:66] Checking if "ha-278480-m03" exists ...
	I1027 22:15:28.222584  550867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278480-m03
	I1027 22:15:28.240186  550867 host.go:66] Checking if "ha-278480-m03" exists ...
	I1027 22:15:28.240447  550867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:15:28.240488  550867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278480-m03
	I1027 22:15:28.257563  550867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/ha-278480-m03/id_rsa Username:docker}
	I1027 22:15:28.355887  550867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:15:28.368880  550867 kubeconfig.go:125] found "ha-278480" server: "https://192.168.49.254:8443"
	I1027 22:15:28.368914  550867 api_server.go:166] Checking apiserver status ...
	I1027 22:15:28.368983  550867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:15:28.380058  550867 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup
	W1027 22:15:28.388704  550867 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:15:28.388755  550867 ssh_runner.go:195] Run: ls
	I1027 22:15:28.392644  550867 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 22:15:28.398730  550867 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 22:15:28.398753  550867 status.go:463] ha-278480-m03 apiserver status = Running (err=<nil>)
	I1027 22:15:28.398762  550867 status.go:176] ha-278480-m03 status: &{Name:ha-278480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:15:28.398781  550867 status.go:174] checking status of ha-278480-m04 ...
	I1027 22:15:28.399108  550867 cli_runner.go:164] Run: docker container inspect ha-278480-m04 --format={{.State.Status}}
	I1027 22:15:28.416445  550867 status.go:371] ha-278480-m04 host status = "Running" (err=<nil>)
	I1027 22:15:28.416470  550867 host.go:66] Checking if "ha-278480-m04" exists ...
	I1027 22:15:28.416751  550867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278480-m04
	I1027 22:15:28.435555  550867 host.go:66] Checking if "ha-278480-m04" exists ...
	I1027 22:15:28.435828  550867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:15:28.435869  550867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278480-m04
	I1027 22:15:28.453249  550867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/ha-278480-m04/id_rsa Username:docker}
	I1027 22:15:28.550415  550867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:15:28.563055  550867 status.go:176] ha-278480-m04 status: &{Name:ha-278480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
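Note: the non-zero exit above is the expected outcome, not a failure. minikube status appears to encode per-component health (host, kubelet, apiserver) as bits of its exit code, so status 7 matches a secondary node whose components are all stopped. A quick check, assuming the cluster is still in that state:

	out/minikube-linux-amd64 -p ha-278480 status; echo "exit=$?"   # 7 while m02 is down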
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.34s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.58s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 node start m02 --alsologtostderr -v 5: (8.618131374s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (125.64s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 stop --alsologtostderr -v 5
E1027 22:15:51.500932  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 stop --alsologtostderr -v 5: (55.515593784s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 start --wait true --alsologtostderr -v 5
E1027 22:16:45.619519  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:45.625973  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:45.637347  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:45.658633  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:45.700056  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:45.781421  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:45.942892  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:46.265070  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:46.906781  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:48.189098  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:50.750821  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:16:55.872643  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:17:06.114558  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:17:14.562667  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:17:26.596635  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 start --wait true --alsologtostderr -v 5: (1m9.982145975s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (125.64s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.56s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 node delete m03 --alsologtostderr -v 5: (9.743208917s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
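Note: the go-template above walks each node's status.conditions and prints only the Ready condition's status, so a healthy cluster of N nodes yields N "True" values. The same check runs standalone against the current kubectl context:

	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"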
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.56s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (46.61s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 stop --alsologtostderr -v 5
E1027 22:18:07.559505  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 stop --alsologtostderr -v 5: (46.491866401s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5: exit status 7 (112.82503ms)

-- stdout --
	ha-278480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-278480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-278480-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 22:18:43.194993  565576 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:18:43.195221  565576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:18:43.195246  565576 out.go:374] Setting ErrFile to fd 2...
	I1027 22:18:43.195254  565576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:18:43.195423  565576 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:18:43.195586  565576 out.go:368] Setting JSON to false
	I1027 22:18:43.195614  565576 mustload.go:66] Loading cluster: ha-278480
	I1027 22:18:43.195673  565576 notify.go:221] Checking for updates...
	I1027 22:18:43.196225  565576 config.go:182] Loaded profile config "ha-278480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:18:43.196250  565576 status.go:174] checking status of ha-278480 ...
	I1027 22:18:43.196762  565576 cli_runner.go:164] Run: docker container inspect ha-278480 --format={{.State.Status}}
	I1027 22:18:43.214312  565576 status.go:371] ha-278480 host status = "Stopped" (err=<nil>)
	I1027 22:18:43.214333  565576 status.go:384] host is not running, skipping remaining checks
	I1027 22:18:43.214341  565576 status.go:176] ha-278480 status: &{Name:ha-278480 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:18:43.214389  565576 status.go:174] checking status of ha-278480-m02 ...
	I1027 22:18:43.214701  565576 cli_runner.go:164] Run: docker container inspect ha-278480-m02 --format={{.State.Status}}
	I1027 22:18:43.230907  565576 status.go:371] ha-278480-m02 host status = "Stopped" (err=<nil>)
	I1027 22:18:43.230922  565576 status.go:384] host is not running, skipping remaining checks
	I1027 22:18:43.230928  565576 status.go:176] ha-278480-m02 status: &{Name:ha-278480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:18:43.230956  565576 status.go:174] checking status of ha-278480-m04 ...
	I1027 22:18:43.231180  565576 cli_runner.go:164] Run: docker container inspect ha-278480-m04 --format={{.State.Status}}
	I1027 22:18:43.247129  565576 status.go:371] ha-278480-m04 host status = "Stopped" (err=<nil>)
	I1027 22:18:43.247160  565576 status.go:384] host is not running, skipping remaining checks
	I1027 22:18:43.247169  565576 status.go:176] ha-278480-m04 status: &{Name:ha-278480-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (46.61s)

TestMultiControlPlane/serial/RestartCluster (53.75s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1027 22:19:29.481852  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.912791344s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.75s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (37.35s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-278480 node add --control-plane --alsologtostderr -v 5: (36.439068318s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-278480 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.35s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (39.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-934737 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1027 22:20:51.498735  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-934737 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.625122197s)
--- PASS: TestJSONOutput/start/Command (39.63s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-934737 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-934737 --output=json --user=testUser: (6.155047973s)
--- PASS: TestJSONOutput/stop/Command (6.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-096856 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-096856 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.420845ms)

-- stdout --
	{"specversion":"1.0","id":"80d89b67-99ff-4583-ab9f-07421b47a732","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-096856] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"96d7e01c-5681-49f4-af13-1b6987e2207d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21790"}}
	{"specversion":"1.0","id":"a0463beb-2656-4a78-9932-373c4c865959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6f877e88-25fb-41d7-94c2-34285c666eee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig"}}
	{"specversion":"1.0","id":"93088368-4294-4dc4-9ba4-870b9dd5dd21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube"}}
	{"specversion":"1.0","id":"403f5061-8d8f-4655-9a8b-58194cd90f5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0079b0cb-3514-4b39-b0f8-d1750f2a51b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a92c930-4b4b-4a33-ac6d-919d1271e7c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-096856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-096856
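Note: with --output=json every line of stdout above is a self-contained CloudEvents-style JSON object, so the stream is machine-parseable. A minimal sketch for extracting the error event's exit code from such a run, assuming jq is installed:

	out/minikube-linux-amd64 start -p json-output-error-096856 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode'   # 56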
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (39.87s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-095067 --network=
E1027 22:21:45.622104  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-095067 --network=: (37.720207153s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-095067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-095067
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-095067: (2.13341079s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.87s)

TestKicCustomNetwork/use_default_bridge_network (22.21s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-218087 --network=bridge
E1027 22:22:13.323281  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-218087 --network=bridge: (20.187681651s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-218087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-218087
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-218087: (2.002302494s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.21s)

TestKicExistingNetwork (24.37s)

=== RUN   TestKicExistingNetwork
I1027 22:22:21.613801  485668 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1027 22:22:21.630852  485668 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1027 22:22:21.630965  485668 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1027 22:22:21.631005  485668 cli_runner.go:164] Run: docker network inspect existing-network
W1027 22:22:21.647527  485668 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1027 22:22:21.647554  485668 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1027 22:22:21.647577  485668 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1027 22:22:21.647690  485668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1027 22:22:21.664740  485668 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d433cca18beb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:32:49:29:e3:17} reservation:<nil>}
I1027 22:22:21.665211  485668 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd90f0}
I1027 22:22:21.665251  485668 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1027 22:22:21.665303  485668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1027 22:22:21.724429  485668 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-460990 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-460990 --network=existing-network: (22.200490203s)
helpers_test.go:175: Cleaning up "existing-network-460990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-460990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-460990: (2.023652433s)
I1027 22:22:45.965138  485668 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.37s)
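
For reference, the flow this test exercises can be replayed by hand. A minimal shell sketch using the commands recorded above (network name, subnet, and profile name are taken from this run; out/minikube-linux-amd64 is the binary under test):

    # pre-create a bridge network, then start a cluster attached to it
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-460990 --network=existing-network
    # delete the profile, then check whether the user-created network survived
    out/minikube-linux-amd64 delete -p existing-network-460990
    docker network ls --filter=label=existing-network --format {{.Name}}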

TestKicCustomSubnet (26.36s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-602934 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-602934 --subnet=192.168.60.0/24: (24.178930442s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-602934 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-602934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-602934
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-602934: (2.154742414s)
--- PASS: TestKicCustomSubnet (26.36s)
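
A minimal sketch of the same check (profile name and subnet from this run): start the cluster on a custom subnet, then ask Docker which subnet the profile's network actually received.

    out/minikube-linux-amd64 start -p custom-subnet-602934 --subnet=192.168.60.0/24
    # the Go template prints the assigned subnet; it should echo 192.168.60.0/24
    docker network inspect custom-subnet-602934 --format "{{(index .IPAM.Config 0).Subnet}}"
    out/minikube-linux-amd64 delete -p custom-subnet-602934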

TestKicStaticIP (27.37s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-083514 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-083514 --static-ip=192.168.200.200: (25.064559347s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-083514 ip
helpers_test.go:175: Cleaning up "static-ip-083514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-083514
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-083514: (2.155400211s)
--- PASS: TestKicStaticIP (27.37s)
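
The static-IP variant has the same shape (values from this run): request a fixed address at start, then confirm `ip` reports it back.

    out/minikube-linux-amd64 start -p static-ip-083514 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-083514 ip   # expected: 192.168.200.200
    out/minikube-linux-amd64 delete -p static-ip-083514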

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (46.39s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-592041 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-592041 --driver=docker  --container-runtime=crio: (20.038213834s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-593829 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-593829 --driver=docker  --container-runtime=crio: (20.309406855s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-592041
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-593829
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-593829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-593829
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-593829: (2.403220236s)
helpers_test.go:175: Cleaning up "first-592041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-592041
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-592041: (2.387772446s)
--- PASS: TestMinikubeProfile (46.39s)
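
A sketch of the profile juggling above, assuming the same two profile names; `profile <name>` switches the active profile and `profile list -ojson` emits machine-readable state for both.

    out/minikube-linux-amd64 start -p first-592041 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p second-593829 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 profile first-592041    # make first-592041 the active profile
    out/minikube-linux-amd64 profile list -ojson     # inspect both profiles as JSON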

TestMountStart/serial/StartWithMountFirst (5.76s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-577712 --memory=3072 --mount-string /tmp/TestMountStartserial75398977/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-577712 --memory=3072 --mount-string /tmp/TestMountStartserial75398977/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.75813977s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.76s)
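
The mount flags above map one-to-one onto a manual invocation; a sketch with this run's values (the VerifyMount* tests below simply list the mounted directory over ssh):

    out/minikube-linux-amd64 start -p mount-start-1-577712 --memory=3072 \
      --mount-string /tmp/TestMountStartserial75398977/001:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # verify the host directory is visible inside the node
    out/minikube-linux-amd64 -p mount-start-1-577712 ssh -- ls /minikube-host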

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-577712 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (5.93s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-592178 --memory=3072 --mount-string /tmp/TestMountStartserial75398977/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-592178 --memory=3072 --mount-string /tmp/TestMountStartserial75398977/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.928061937s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.93s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-592178 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-577712 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-577712 --alsologtostderr -v=5: (1.676891624s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-592178 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-592178
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-592178: (1.257294896s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-592178
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-592178: (6.761078579s)
--- PASS: TestMountStart/serial/RestartStopped (7.76s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-592178 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (64.6s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-666529 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1027 22:25:51.499081  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-666529 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.111202959s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.60s)

TestMultiNode/serial/DeployApp2Nodes (4.27s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-666529 -- rollout status deployment/busybox: (2.876421539s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-9ssr6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-t9vb7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-9ssr6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-t9vb7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-9ssr6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-t9vb7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.27s)
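
The DNS check above boils down to: deploy, wait for rollout, then resolve an external and an in-cluster name from every replica. A minimal loop over the pod names the test collects (the jsonpath expression is taken verbatim from the run):

    out/minikube-linux-amd64 kubectl -p multinode-666529 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-666529 -- rollout status deployment/busybox
    for pod in $(out/minikube-linux-amd64 kubectl -p multinode-666529 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec "$pod" -- nslookup kubernetes.io
      out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done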

TestMultiNode/serial/PingHostFrom2Pods (0.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-9ssr6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-9ssr6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-t9vb7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-t9vb7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)
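
The host-reachability probe is worth calling out: each pod resolves host.minikube.internal, scrapes the address out of the nslookup output, and pings it. A sketch for one pod (pod name from this run; the awk/cut pipeline is copied from the test):

    HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-9ssr6 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p multinode-666529 -- exec busybox-7b57f96db7-9ssr6 -- sh -c "ping -c 1 $HOST_IP"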

TestMultiNode/serial/AddNode (26.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-666529 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-666529 -v=5 --alsologtostderr: (25.641362824s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.31s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-666529 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (9.94s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp testdata/cp-test.txt multinode-666529:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1781218649/001/cp-test_multinode-666529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529:/home/docker/cp-test.txt multinode-666529-m02:/home/docker/cp-test_multinode-666529_multinode-666529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m02 "sudo cat /home/docker/cp-test_multinode-666529_multinode-666529-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529:/home/docker/cp-test.txt multinode-666529-m03:/home/docker/cp-test_multinode-666529_multinode-666529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m03 "sudo cat /home/docker/cp-test_multinode-666529_multinode-666529-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp testdata/cp-test.txt multinode-666529-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1781218649/001/cp-test_multinode-666529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529-m02:/home/docker/cp-test.txt multinode-666529:/home/docker/cp-test_multinode-666529-m02_multinode-666529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529 "sudo cat /home/docker/cp-test_multinode-666529-m02_multinode-666529.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529-m02:/home/docker/cp-test.txt multinode-666529-m03:/home/docker/cp-test_multinode-666529-m02_multinode-666529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m03 "sudo cat /home/docker/cp-test_multinode-666529-m02_multinode-666529-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp testdata/cp-test.txt multinode-666529-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1781218649/001/cp-test_multinode-666529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529-m03:/home/docker/cp-test.txt multinode-666529:/home/docker/cp-test_multinode-666529-m03_multinode-666529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529 "sudo cat /home/docker/cp-test_multinode-666529-m03_multinode-666529.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529-m03:/home/docker/cp-test.txt multinode-666529-m02:/home/docker/cp-test_multinode-666529-m03_multinode-666529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529-m02 "sudo cat /home/docker/cp-test_multinode-666529-m03_multinode-666529-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.94s)
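
The copy matrix above covers local->node, node->local, and node->node in every direction, each verified with `ssh ... sudo cat`. One round trip as a sketch (paths from this run):

    out/minikube-linux-amd64 -p multinode-666529 cp testdata/cp-test.txt multinode-666529:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-666529 ssh -n multinode-666529 "sudo cat /home/docker/cp-test.txt"
    # node -> node: source and destination are both <node>:<path>
    out/minikube-linux-amd64 -p multinode-666529 cp multinode-666529:/home/docker/cp-test.txt multinode-666529-m02:/home/docker/cp-test_multinode-666529_multinode-666529-m02.txt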

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-666529 node stop m03: (1.268659757s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-666529 status: exit status 7 (496.66572ms)

-- stdout --
	multinode-666529
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-666529-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-666529-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-666529 status --alsologtostderr: exit status 7 (502.639514ms)

-- stdout --
	multinode-666529
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-666529-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-666529-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 22:26:39.959905  625217 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:26:39.960191  625217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:26:39.960202  625217 out.go:374] Setting ErrFile to fd 2...
	I1027 22:26:39.960209  625217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:26:39.960417  625217 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:26:39.960612  625217 out.go:368] Setting JSON to false
	I1027 22:26:39.960645  625217 mustload.go:66] Loading cluster: multinode-666529
	I1027 22:26:39.960741  625217 notify.go:221] Checking for updates...
	I1027 22:26:39.961886  625217 config.go:182] Loaded profile config "multinode-666529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:26:39.961921  625217 status.go:174] checking status of multinode-666529 ...
	I1027 22:26:39.962870  625217 cli_runner.go:164] Run: docker container inspect multinode-666529 --format={{.State.Status}}
	I1027 22:26:39.980687  625217 status.go:371] multinode-666529 host status = "Running" (err=<nil>)
	I1027 22:26:39.980733  625217 host.go:66] Checking if "multinode-666529" exists ...
	I1027 22:26:39.981057  625217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-666529
	I1027 22:26:39.997226  625217 host.go:66] Checking if "multinode-666529" exists ...
	I1027 22:26:39.997474  625217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:26:39.997526  625217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-666529
	I1027 22:26:40.014668  625217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/multinode-666529/id_rsa Username:docker}
	I1027 22:26:40.111784  625217 ssh_runner.go:195] Run: systemctl --version
	I1027 22:26:40.118082  625217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:26:40.130364  625217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:26:40.190768  625217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-27 22:26:40.179828049 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:26:40.191339  625217 kubeconfig.go:125] found "multinode-666529" server: "https://192.168.67.2:8443"
	I1027 22:26:40.191373  625217 api_server.go:166] Checking apiserver status ...
	I1027 22:26:40.191419  625217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:26:40.203574  625217 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	W1027 22:26:40.212466  625217 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:26:40.212530  625217 ssh_runner.go:195] Run: ls
	I1027 22:26:40.216404  625217 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1027 22:26:40.220676  625217 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1027 22:26:40.220699  625217 status.go:463] multinode-666529 apiserver status = Running (err=<nil>)
	I1027 22:26:40.220711  625217 status.go:176] multinode-666529 status: &{Name:multinode-666529 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:26:40.220725  625217 status.go:174] checking status of multinode-666529-m02 ...
	I1027 22:26:40.220978  625217 cli_runner.go:164] Run: docker container inspect multinode-666529-m02 --format={{.State.Status}}
	I1027 22:26:40.238416  625217 status.go:371] multinode-666529-m02 host status = "Running" (err=<nil>)
	I1027 22:26:40.238446  625217 host.go:66] Checking if "multinode-666529-m02" exists ...
	I1027 22:26:40.238774  625217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-666529-m02
	I1027 22:26:40.256389  625217 host.go:66] Checking if "multinode-666529-m02" exists ...
	I1027 22:26:40.256687  625217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:26:40.256726  625217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-666529-m02
	I1027 22:26:40.273826  625217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21790-482142/.minikube/machines/multinode-666529-m02/id_rsa Username:docker}
	I1027 22:26:40.371368  625217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:26:40.383829  625217 status.go:176] multinode-666529-m02 status: &{Name:multinode-666529-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:26:40.383863  625217 status.go:174] checking status of multinode-666529-m03 ...
	I1027 22:26:40.384166  625217 cli_runner.go:164] Run: docker container inspect multinode-666529-m03 --format={{.State.Status}}
	I1027 22:26:40.401244  625217 status.go:371] multinode-666529-m03 host status = "Stopped" (err=<nil>)
	I1027 22:26:40.401288  625217 status.go:384] host is not running, skipping remaining checks
	I1027 22:26:40.401298  625217 status.go:176] multinode-666529-m03 status: &{Name:multinode-666529-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
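
Note the exit-code convention this test leans on: with one node stopped, `status` still prints per-node state but exits 7 rather than 0 in this run, so a script replaying the sequence needs to tolerate the non-zero exit.

    out/minikube-linux-amd64 -p multinode-666529 node stop m03
    # capture the exit code instead of letting `set -e` abort; 7 here means a host is stopped
    out/minikube-linux-amd64 -p multinode-666529 status || echo "status exited $?"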

TestMultiNode/serial/StartAfterStop (7.49s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 node start m03 -v=5 --alsologtostderr
E1027 22:26:45.619090  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-666529 node start m03 -v=5 --alsologtostderr: (6.778664078s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.49s)

TestMultiNode/serial/RestartKeepsNodes (80.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-666529
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-666529
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-666529: (31.333693412s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-666529 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-666529 --wait=true -v=5 --alsologtostderr: (48.730925359s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-666529
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.19s)
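
A sketch of the restart invariant being checked: the node list recorded before a full stop/start cycle should match the list afterwards.

    out/minikube-linux-amd64 node list -p multinode-666529    # record the node set
    out/minikube-linux-amd64 stop -p multinode-666529
    out/minikube-linux-amd64 start -p multinode-666529 --wait=true
    out/minikube-linux-amd64 node list -p multinode-666529    # should match the pre-stop list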

TestMultiNode/serial/DeleteNode (5.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-666529 node delete m03: (4.623375834s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)

TestMultiNode/serial/StopMultiNode (30.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-666529 stop: (30.085047721s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-666529 status: exit status 7 (96.966283ms)

-- stdout --
	multinode-666529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-666529-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-666529 status --alsologtostderr: exit status 7 (96.356755ms)

-- stdout --
	multinode-666529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-666529-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 22:28:43.577829  634988 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:28:43.578085  634988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:28:43.578095  634988 out.go:374] Setting ErrFile to fd 2...
	I1027 22:28:43.578099  634988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:28:43.578293  634988 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:28:43.578469  634988 out.go:368] Setting JSON to false
	I1027 22:28:43.578506  634988 mustload.go:66] Loading cluster: multinode-666529
	I1027 22:28:43.578656  634988 notify.go:221] Checking for updates...
	I1027 22:28:43.578975  634988 config.go:182] Loaded profile config "multinode-666529": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:28:43.578996  634988 status.go:174] checking status of multinode-666529 ...
	I1027 22:28:43.580126  634988 cli_runner.go:164] Run: docker container inspect multinode-666529 --format={{.State.Status}}
	I1027 22:28:43.598322  634988 status.go:371] multinode-666529 host status = "Stopped" (err=<nil>)
	I1027 22:28:43.598353  634988 status.go:384] host is not running, skipping remaining checks
	I1027 22:28:43.598367  634988 status.go:176] multinode-666529 status: &{Name:multinode-666529 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:28:43.598400  634988 status.go:174] checking status of multinode-666529-m02 ...
	I1027 22:28:43.598664  634988 cli_runner.go:164] Run: docker container inspect multinode-666529-m02 --format={{.State.Status}}
	I1027 22:28:43.615456  634988 status.go:371] multinode-666529-m02 host status = "Stopped" (err=<nil>)
	I1027 22:28:43.615475  634988 status.go:384] host is not running, skipping remaining checks
	I1027 22:28:43.615482  634988 status.go:176] multinode-666529-m02 status: &{Name:multinode-666529-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.28s)

TestMultiNode/serial/RestartMultiNode (25.5s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-666529 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-666529 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (24.90470912s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-666529 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (25.50s)

TestMultiNode/serial/ValidateNameConflict (23.58s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-666529
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-666529-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-666529-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.067911ms)

-- stdout --
	* [multinode-666529-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-666529-m02' is duplicated with machine name 'multinode-666529-m02' in profile 'multinode-666529'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-666529-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-666529-m03 --driver=docker  --container-runtime=crio: (20.778133617s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-666529
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-666529: exit status 80 (291.203017ms)

-- stdout --
	* Adding node m03 to cluster multinode-666529 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-666529-m03 already exists in multinode-666529-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-666529-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-666529-m03: (2.372943206s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.58s)

TestPreload (128.3s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-457242 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-457242 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (52.083801423s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-457242 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-457242 image pull gcr.io/k8s-minikube/busybox: (2.247686227s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-457242
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-457242: (5.841010407s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-457242 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1027 22:30:51.503963  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-457242 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m5.482192102s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-457242 image list
helpers_test.go:175: Cleaning up "test-preload-457242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-457242
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-457242: (2.409566856s)
--- PASS: TestPreload (128.30s)
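
The preload scenario above in shell form (flags and image from this run): build a cluster without the preloaded tarball, add an image, bounce the cluster, and confirm the image is still present.

    out/minikube-linux-amd64 start -p test-preload-457242 --memory=3072 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    out/minikube-linux-amd64 -p test-preload-457242 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-457242
    out/minikube-linux-amd64 start -p test-preload-457242 --memory=3072 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-457242 image list   # busybox should survive the restart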

TestScheduledStopUnix (97.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-589123 --memory=3072 --driver=docker  --container-runtime=crio
E1027 22:31:45.619257  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-589123 --memory=3072 --driver=docker  --container-runtime=crio: (21.304372942s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-589123 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-589123 -n scheduled-stop-589123
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-589123 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1027 22:32:06.938302  485668 retry.go:31] will retry after 58.883µs: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.939475  485668 retry.go:31] will retry after 194.604µs: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.940632  485668 retry.go:31] will retry after 271.481µs: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.941779  485668 retry.go:31] will retry after 428.343µs: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.942921  485668 retry.go:31] will retry after 313.135µs: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.944095  485668 retry.go:31] will retry after 633.414µs: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.945238  485668 retry.go:31] will retry after 637.42µs: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.946373  485668 retry.go:31] will retry after 1.758236ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.948573  485668 retry.go:31] will retry after 3.739721ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.952784  485668 retry.go:31] will retry after 4.317557ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.958007  485668 retry.go:31] will retry after 7.889358ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.966229  485668 retry.go:31] will retry after 12.562157ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.979546  485668 retry.go:31] will retry after 8.215818ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:06.988843  485668 retry.go:31] will retry after 21.00204ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:07.010129  485668 retry.go:31] will retry after 31.160519ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
I1027 22:32:07.042453  485668 retry.go:31] will retry after 65.262298ms: open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/scheduled-stop-589123/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-589123 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-589123 -n scheduled-stop-589123
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-589123
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-589123 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1027 22:33:08.687811  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-589123
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-589123: exit status 7 (87.218496ms)

-- stdout --
	scheduled-stop-589123
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-589123 -n scheduled-stop-589123
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-589123 -n scheduled-stop-589123: exit status 7 (89.329056ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-589123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-589123
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-589123: (4.267074008s)
--- PASS: TestScheduledStopUnix (97.16s)
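
The scheduled-stop lifecycle exercised above, as a sketch (profile name from this run): arm a stop, re-arm it (which appears to replace the pending one, hence the "process already finished" lines), cancel, and poll the pending-stop field.

    out/minikube-linux-amd64 stop -p scheduled-stop-589123 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-589123 --schedule 15s    # re-arm
    out/minikube-linux-amd64 stop -p scheduled-stop-589123 --cancel-scheduled
    # TimeToStop is populated while a stop is pending
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-589123 -n scheduled-stop-589123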

TestInsufficientStorage (9.72s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-960733 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-960733 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.176464428s)

-- stdout --
	{"specversion":"1.0","id":"3f768f2d-a633-4c8e-bec3-4df169848da9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-960733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ed2742a-b8cb-44db-8bf9-474eb9edf849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21790"}}
	{"specversion":"1.0","id":"6ab27965-93ea-4b38-897f-73c4d6d3cf39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"21aad46e-7cd4-4ef5-9feb-3d91b1ce613f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig"}}
	{"specversion":"1.0","id":"f2141703-ca82-4de9-b137-44e2c57caa2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube"}}
	{"specversion":"1.0","id":"e04867f8-0b0e-49c0-afcb-dcdaa9c0f299","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d746dd32-9a48-4024-a991-30862a8c9e93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"08f00cae-3ebb-4510-a2fb-705a2994a94e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"51fed518-5141-49cb-9414-c252d071aebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"955bf773-a9c7-4da1-b96e-c98d9a6e7138","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e844a03e-f048-412b-b788-57cee0c417fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0d4a3905-6baf-461c-a9ac-afe4ad095398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-960733\" primary control-plane node in \"insufficient-storage-960733\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f559c2a-e8e1-4ba7-a60f-26063da0d577","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cd0a589-c2c2-4e83-8cec-cfaa6f7a82b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"db126dd6-97d7-4238-aee8-66ea58e1d3a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-960733 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-960733 --output=json --layout=cluster: exit status 7 (295.683737ms)

-- stdout --
	{"Name":"insufficient-storage-960733","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-960733","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1027 22:33:29.788547  655293 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-960733" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-960733 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-960733 --output=json --layout=cluster: exit status 7 (293.355231ms)

-- stdout --
	{"Name":"insufficient-storage-960733","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-960733","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1027 22:33:30.083289  655403 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-960733" does not appear in /home/jenkins/minikube-integration/21790-482142/kubeconfig
	E1027 22:33:30.093770  655403 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/insufficient-storage-960733/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-960733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-960733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-960733: (1.95840902s)
--- PASS: TestInsufficientStorage (9.72s)
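
Note for anyone replaying this check outside the harness: below is a minimal Go sketch for decoding the "--output=json --layout=cluster" payload shown above. The struct fields mirror the keys visible in the logged JSON; the sample literal is abbreviated, and this is not the test suite's own code.

// statuscheck.go - minimal sketch for decoding the cluster-layout status
// JSON printed by `minikube status --output=json --layout=cluster` above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// componentStatus mirrors the per-component objects in the logged payload.
type componentStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

// clusterStatus mirrors the top-level keys visible in the logged payload.
type clusterStatus struct {
	Name         string                     `json:"Name"`
	StatusCode   int                        `json:"StatusCode"`
	StatusName   string                     `json:"StatusName"`
	StatusDetail string                     `json:"StatusDetail"`
	Components   map[string]componentStatus `json:"Components"`
}

func main() {
	// Abbreviated copy of the payload logged by the test above.
	raw := `{"Name":"insufficient-storage-960733","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space"}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatal(err)
	}
	// 507 is the InsufficientStorage code the test asserts on.
	fmt.Println(st.StatusName, st.StatusCode == 507)
}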

                                                
                                    
TestRunningBinaryUpgrade (50.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.511995305 start -p running-upgrade-801209 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.511995305 start -p running-upgrade-801209 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.812273849s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-801209 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1027 22:35:51.498707  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-801209 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.305208112s)
helpers_test.go:175: Cleaning up "running-upgrade-801209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-801209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-801209: (4.786042291s)
--- PASS: TestRunningBinaryUpgrade (50.03s)

TestKubernetesUpgrade (304.49s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.019748396s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-695499
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-695499: (1.918759369s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-695499 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-695499 status --format={{.Host}}: exit status 7 (94.272135ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.43116211s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-695499 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (86.209066ms)

-- stdout --
	* [kubernetes-upgrade-695499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-695499
	    minikube start -p kubernetes-upgrade-695499 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6954992 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-695499 --kubernetes-version=v1.34.1

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695499 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.315725091s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-695499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-695499
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-695499: (2.563303141s)
--- PASS: TestKubernetesUpgrade (304.49s)
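
For context, the flow this test drives can be reproduced by hand with the same minikube flags that appear in the log. The sketch below shells out via os/exec, assuming a minikube binary is on PATH; the profile name "demo-upgrade" is made up for illustration and is not the test's profile.

// upgradeflow.go - a sketch of the start / stop / upgrade / rejected-downgrade
// path this test exercises, using the CLI flags shown in the log above.
package main

import (
	"log"
	"os"
	"os/exec"
)

// run invokes minikube with the given arguments, streaming its output.
func run(args ...string) error {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const p = "demo-upgrade" // illustrative profile name
	// Start on the old release, stop, then restart on the new one.
	steps := [][]string{
		{"start", "-p", p, "--memory=3072", "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio"},
		{"stop", "-p", p},
		{"start", "-p", p, "--memory=3072", "--kubernetes-version=v1.34.1", "--driver=docker", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			log.Fatalf("minikube %v: %v", s, err)
		}
	}
	// A downgrade attempt is expected to fail with exit status 106
	// (K8S_DOWNGRADE_UNSUPPORTED), as the log above shows.
	if err := run("start", "-p", p, "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio"); err == nil {
		log.Fatal("expected the downgrade to be rejected")
	}
}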

                                                
                                    
TestMissingContainerUpgrade (84.34s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2813071181 start -p missing-upgrade-912550 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2813071181 start -p missing-upgrade-912550 --memory=3072 --driver=docker  --container-runtime=crio: (24.65557445s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-912550
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-912550: (10.632037086s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-912550
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-912550 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-912550 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.607982057s)
helpers_test.go:175: Cleaning up "missing-upgrade-912550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-912550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-912550: (2.516298499s)
--- PASS: TestMissingContainerUpgrade (84.34s)

TestPause/serial/Start (53.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-067652 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-067652 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (53.793865163s)
--- PASS: TestPause/serial/Start (53.79s)

TestStoppedBinaryUpgrade/Setup (3.06s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.06s)

TestStoppedBinaryUpgrade/Upgrade (62.89s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.266182230 start -p stopped-upgrade-126023 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1027 22:33:54.564388  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/addons-681393/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.266182230 start -p stopped-upgrade-126023 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.747102414s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.266182230 -p stopped-upgrade-126023 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.266182230 -p stopped-upgrade-126023 stop: (4.118781513s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-126023 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-126023 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.027266777s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.89s)

TestPause/serial/SecondStartNoReconfiguration (6.64s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-067652 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-067652 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.626906414s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.64s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-126023
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-126023: (1.180430929s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565903 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-565903 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (95.413532ms)

-- stdout --
	* [NoKubernetes-565903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
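
The exit status 14 above is the MK_USAGE code named in the stderr output. A caller that wants to branch on it could inspect the exit code as in this hedged sketch; the profile name "NoKubernetes-demo" is hypothetical and minikube is assumed to be on PATH.

// exitcode.go - sketch of distinguishing a minikube usage error (exit 14,
// per the log above) from other failures.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Deliberately combine flags the log shows to be mutually exclusive.
	cmd := exec.Command("minikube", "start", "-p", "NoKubernetes-demo",
		"--no-kubernetes", "--kubernetes-version=v1.28.0")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 14 is the MK_USAGE exit code observed in the log above.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}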

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (27.54s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565903 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565903 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.184801193s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-565903 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.54s)

TestNetworkPlugins/group/false (3.84s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-293335 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-293335 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (173.250299ms)

-- stdout --
	* [false-293335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I1027 22:36:12.412399  699076 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:36:12.412512  699076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:36:12.412523  699076 out.go:374] Setting ErrFile to fd 2...
	I1027 22:36:12.412530  699076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:36:12.412751  699076 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-482142/.minikube/bin
	I1027 22:36:12.413325  699076 out.go:368] Setting JSON to false
	I1027 22:36:12.414710  699076 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8311,"bootTime":1761596261,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 22:36:12.414810  699076 start.go:143] virtualization: kvm guest
	I1027 22:36:12.416475  699076 out.go:179] * [false-293335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 22:36:12.417491  699076 notify.go:221] Checking for updates...
	I1027 22:36:12.417518  699076 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:36:12.418454  699076 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:36:12.419720  699076 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-482142/kubeconfig
	I1027 22:36:12.420703  699076 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-482142/.minikube
	I1027 22:36:12.421711  699076 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 22:36:12.422630  699076 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:36:12.424137  699076 config.go:182] Loaded profile config "NoKubernetes-565903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:36:12.424251  699076 config.go:182] Loaded profile config "cert-expiration-219241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:36:12.424381  699076 config.go:182] Loaded profile config "kubernetes-upgrade-695499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:36:12.424524  699076 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:36:12.451339  699076 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 22:36:12.451493  699076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:36:12.515361  699076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 22:36:12.504006524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 22:36:12.515514  699076 docker.go:318] overlay module found
	I1027 22:36:12.517132  699076 out.go:179] * Using the docker driver based on user configuration
	I1027 22:36:12.518070  699076 start.go:307] selected driver: docker
	I1027 22:36:12.518086  699076 start.go:928] validating driver "docker" against <nil>
	I1027 22:36:12.518109  699076 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:36:12.519713  699076 out.go:203] 
	W1027 22:36:12.520616  699076 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1027 22:36:12.521457  699076 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-293335 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-293335

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-293335

>>> host: /etc/nsswitch.conf:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /etc/hosts:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /etc/resolv.conf:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-293335

>>> host: crictl pods:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: crictl containers:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> k8s: describe netcat deployment:
error: context "false-293335" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-293335" does not exist

>>> k8s: netcat logs:
error: context "false-293335" does not exist

>>> k8s: describe coredns deployment:
error: context "false-293335" does not exist

>>> k8s: describe coredns pods:
error: context "false-293335" does not exist

>>> k8s: coredns logs:
error: context "false-293335" does not exist

>>> k8s: describe api server pod(s):
error: context "false-293335" does not exist

>>> k8s: api server logs:
error: context "false-293335" does not exist

>>> host: /etc/cni:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: ip a s:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: ip r s:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: iptables-save:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: iptables table nat:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> k8s: describe kube-proxy daemon set:
error: context "false-293335" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-293335" does not exist

>>> k8s: kube-proxy logs:
error: context "false-293335" does not exist

>>> host: kubelet daemon status:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: kubelet daemon config:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> k8s: kubelet logs:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-219241
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-695499
contexts:
- context:
    cluster: cert-expiration-219241
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-219241
  name: cert-expiration-219241
- context:
    cluster: kubernetes-upgrade-695499
    user: kubernetes-upgrade-695499
  name: kubernetes-upgrade-695499
current-context: ""
kind: Config
users:
- name: cert-expiration-219241
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/cert-expiration-219241/client.crt
    client-key: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/cert-expiration-219241/client.key
- name: kubernetes-upgrade-695499
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/client.crt
    client-key: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-293335

>>> host: docker daemon status:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: docker daemon config:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /etc/docker/daemon.json:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: docker system info:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: cri-docker daemon status:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: cri-docker daemon config:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: cri-dockerd version:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: containerd daemon status:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: containerd daemon config:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /etc/containerd/config.toml:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: containerd config dump:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: crio daemon status:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: crio daemon config:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: /etc/crio:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

>>> host: crio config:
* Profile "false-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293335"

----------------------- debugLogs end: false-293335 [took: 3.497731647s] --------------------------------
helpers_test.go:175: Cleaning up "false-293335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-293335
--- PASS: TestNetworkPlugins/group/false (3.84s)
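
The repeated "context was not found" lines in the debugLogs above all follow from the kubectl config dump: only cert-expiration-219241 and kubernetes-upgrade-695499 were ever written to the kubeconfig, so false-293335 has no context. A small sketch of that lookup with client-go (an added dependency; the kubeconfig path below is illustrative, not the CI path):

// contexts.go - sketch of checking whether a context exists in a kubeconfig,
// mirroring the lookup kubectl performed in the debugLogs above.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the CI run uses its own kubeconfig location.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	// The dump above lists only two contexts, so this lookup reports the
	// same missing-context condition kubectl did.
	if _, ok := cfg.Contexts["false-293335"]; !ok {
		fmt.Println(`context "false-293335" does not exist`)
	}
}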

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (58.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.503770088s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.50s)

TestNoKubernetes/serial/StartWithStopK8s (17.1s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.736622363s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-565903 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-565903 status -o json: exit status 2 (335.747487ms)

-- stdout --
	{"Name":"NoKubernetes-565903","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-565903
E1027 22:36:45.619693  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-565903: (2.024574346s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.10s)

TestNoKubernetes/serial/Start (8.46s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.463331457s)
--- PASS: TestNoKubernetes/serial/Start (8.46s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-565903 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-565903 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.40486ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
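
The check above treats a non-zero ssh exit as "kubelet not running"; systemctl is-active exits non-zero for units that are not active (the remote command above exited 3). A rough Go equivalent, assuming minikube is on PATH and reusing the profile name from the log:

// kubeletcheck.go - sketch of verifying the kubelet is inactive by running
// systemctl through `minikube ssh`, as the test above does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-565903",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// Expected for a --no-kubernetes profile: the unit is not active.
		fmt.Println("kubelet is not active:", err)
	} else {
		fmt.Println("kubelet unexpectedly active")
	}
}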

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.76s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-565903
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-565903: (1.274611089s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (6.63s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565903 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565903 --driver=docker  --container-runtime=crio: (6.626784769s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.63s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-565903 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-565903 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.16599ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStartStop/group/no-preload/serial/FirstStart (50.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.986788379s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.99s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-908589 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [903d9a95-da5b-48dd-9672-2c3ef418e1a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [903d9a95-da5b-48dd-9672-2c3ef418e1a8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003454093s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-908589 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.27s)

TestStartStop/group/old-k8s-version/serial/Stop (16.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-908589 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-908589 --alsologtostderr -v=3: (16.319848427s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908589 -n old-k8s-version-908589
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908589 -n old-k8s-version-908589: exit status 7 (78.984215ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-908589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
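
The EnableAddonAfterStop steps lean on minikube's status exit codes: with the host stopped, `status --format={{.Host}}` still prints "Stopped" on stdout but exits 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped profile. A sketch of that tolerance, with a hypothetical hostState helper (exit-code semantics as observed in this log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostState runs `minikube status` and treats exit status 7 as non-fatal,
// mirroring the "status error: exit status 7 (may be ok)" line above.
func hostState(profile string) (string, error) {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// The cluster exists but is stopped; stdout still carries the
		// state string, so callers can proceed (e.g. to enable addons).
		return string(out), nil
	}
	return string(out), err
}

func main() {
	state, err := hostState("old-k8s-version-908589")
	fmt.Printf("host=%q err=%v\n", state, err)
}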

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-908589 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.2192027s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908589 -n old-k8s-version-908589
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.22s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-188814 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9683c10c-e747-4fa9-9007-4f2974e50e4e] Pending
helpers_test.go:352: "busybox" [9683c10c-e747-4fa9-9007-4f2974e50e4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9683c10c-e747-4fa9-9007-4f2974e50e4e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004387934s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-188814 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (18.52s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-188814 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-188814 --alsologtostderr -v=3: (18.52271643s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (43.54s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.535085239s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188814 -n no-preload-188814
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188814 -n no-preload-188814: exit status 7 (86.057335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-188814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (46.33s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-188814 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.958980212s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-188814 -n no-preload-188814
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-n7dg8" [350e6819-9685-4f35-baab-0b7e8df8513a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005303816s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-n7dg8" [350e6819-9685-4f35-baab-0b7e8df8513a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004005311s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-908589 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-908589 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
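
VerifyKubernetesImages asks `minikube image list --format=json` for everything cached in the node and reports anything outside the expected Kubernetes image set, which is why kindest/kindnetd and the busybox test image are logged as "non-minikube" images. A sketch of that filtering; the JSON field name and the registry.k8s.io prefix check are assumptions for illustration (the real schema and expected-image list live in minikube and start_stop_delete_test.go):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageEntry assumes each JSON element carries repo tags; the actual
// `image list --format=json` schema may differ.
type imageEntry struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "old-k8s-version-908589",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var entries []imageEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Println("unexpected JSON shape:", err)
		return
	}
	for _, e := range entries {
		for _, tag := range e.RepoTags {
			// Core control-plane images come from registry.k8s.io;
			// everything else gets flagged, as in the log above.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}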

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.09393697s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-829976 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f694dbe2-ee8d-4ba0-9699-55c971369055] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f694dbe2-ee8d-4ba0-9699-55c971369055] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004884599s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-829976 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (16.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-829976 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-829976 --alsologtostderr -v=3: (16.152113427s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6rnms" [95a930ae-c927-4ee0-88ae-5ceaa45d8edc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003612683s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6rnms" [95a930ae-c927-4ee0-88ae-5ceaa45d8edc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003983791s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-188814 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-188814 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829976 -n embed-certs-829976
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829976 -n embed-certs-829976: exit status 7 (117.79714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-829976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-829976 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.790787448s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829976 -n embed-certs-829976
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (29.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (29.692190467s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-927034 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cbed7aab-1041-41f4-a104-e6676919cc97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cbed7aab-1041-41f4-a104-e6676919cc97] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003683103s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-927034 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.3s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.297094091s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-927034 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-927034 --alsologtostderr -v=3: (18.291572341s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (18.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-290425 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-290425 --alsologtostderr -v=3: (18.554953506s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (18.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034: exit status 7 (112.365605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-927034 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-927034 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.763645687s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927034 -n default-k8s-diff-port-927034
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-293335 "pgrep -a kubelet"
I1027 22:40:20.521837  485668 config.go:182] Loaded profile config "auto-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-293335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-thq7k" [58832d3d-8efd-4b8e-a2a6-761aefc88e96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-thq7k" [58832d3d-8efd-4b8e-a2a6-761aefc88e96] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00327705s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lfssc" [9b2e681b-9a25-4761-a5b6-5c3800ecbc39] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004092005s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-290425 -n newest-cni-290425
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-290425 -n newest-cni-290425: exit status 7 (99.08451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-290425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.99s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-290425 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.623786745s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-290425 -n newest-cni-290425
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lfssc" [9b2e681b-9a25-4761-a5b6-5c3800ecbc39] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004476944s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-829976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-293335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
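
The DNS/Localhost/HairPin trio that closes out each network plugin probes the cluster from inside the netcat pod: nslookup of kubernetes.default exercises service DNS, `nc -z localhost 8080` confirms the pod can reach its own port, and `nc -z netcat 8080` confirms hairpin traffic, i.e. the pod reaching itself back through its own Service. A sketch driving the same three kubectl probes (hypothetical wrapper; the probe commands are taken verbatim from the log):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment, just like the
// `kubectl exec deployment/netcat` invocations in the log.
func probe(kubeContext, shellCmd string) error {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).Run()
}

func main() {
	checks := []struct{ name, cmd string }{
		{"DNS", "nslookup kubernetes.default"},
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"},
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, c := range checks {
		if err := probe("auto-293335", c.cmd); err != nil {
			fmt.Printf("%s failed: %v\n", c.name, err)
		} else {
			fmt.Printf("%s ok\n", c.name)
		}
	}
}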

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-829976 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-290425 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (41.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.97964096s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.98s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (50.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.910629278s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.91s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (56.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (56.324711464s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s2lwd" [a81bcd0c-04cb-409e-aad0-b5a2fa67a094] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005088144s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s2lwd" [a81bcd0c-04cb-409e-aad0-b5a2fa67a094] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003426079s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-927034 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927034 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (42.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.105039799s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-d5ttz" [839efdd2-6ba4-40b1-93d0-059354d582bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003573015s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-293335 "pgrep -a kubelet"
I1027 22:41:31.418756  485668 config.go:182] Loaded profile config "kindnet-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-293335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8dxc4" [359bd8c8-66ae-4464-9460-d9b66808304e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8dxc4" [359bd8c8-66ae-4464-9460-d9b66808304e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00308777s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hqlrb" [e1dfc43a-6c10-4fe8-93f8-2f6207da8134] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-hqlrb" [e1dfc43a-6c10-4fe8-93f8-2f6207da8134] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004078543s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-293335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-293335 "pgrep -a kubelet"
I1027 22:41:42.202909  485668 config.go:182] Loaded profile config "calico-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-293335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-94v8b" [86c9c4b4-f4f3-4926-a114-e4d8718d107a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-94v8b" [86c9c4b4-f4f3-4926-a114-e4d8718d107a] Running
E1027 22:41:45.619087  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/functional-287960/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.0043225s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-293335 "pgrep -a kubelet"
I1027 22:41:48.257562  485668 config.go:182] Loaded profile config "custom-flannel-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-293335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vzpvm" [5f2d4077-e303-46dc-bc8c-95bb516de7e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vzpvm" [5f2d4077-e303-46dc-bc8c-95bb516de7e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003678606s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-293335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-293335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (46.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.877526824s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.88s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-293335 "pgrep -a kubelet"
I1027 22:42:03.121393  485668 config.go:182] Loaded profile config "enable-default-cni-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-293335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-42q9n" [7638ca98-e1ff-4c99-a031-1abba56aabc7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-42q9n" [7638ca98-e1ff-4c99-a031-1abba56aabc7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005287917s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (37.66s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-293335 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.661250041s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.66s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-293335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-sxqbg" [584d0b25-53e4-44c8-b5af-e20d69068f7f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003517655s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
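
With flannel's controller up, all three CNIs that ship a controller have now cleared their ControllerPod gate; the namespace and label selector differ per plugin. Collected from this run into a small sketch (auto, bridge, custom-flannel, and enable-default-cni have no ControllerPod step here and go straight to NetCatPod):

package main

import "fmt"

// controllerSelector records where each CNI's controller pods were polled
// for in this run, per the ControllerPod checks above.
type controllerSelector struct {
	namespace string
	selector  string
}

func main() {
	byPlugin := map[string]controllerSelector{
		"kindnet": {namespace: "kube-system", selector: "app=kindnet"},
		"calico":  {namespace: "kube-system", selector: "k8s-app=calico-node"},
		"flannel": {namespace: "kube-flannel", selector: "app=flannel"},
	}
	for plugin, cs := range byPlugin {
		fmt.Printf("%-8s ns=%-12s selector=%s\n", plugin, cs.namespace, cs.selector)
	}
}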

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-293335 "pgrep -a kubelet"
I1027 22:42:50.480337  485668 config.go:182] Loaded profile config "bridge-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-293335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rg5t2" [720fa370-79d2-42ca-8b2a-4cf91b0c2ff5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rg5t2" [720fa370-79d2-42ca-8b2a-4cf91b0c2ff5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004026005s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-293335 "pgrep -a kubelet"
I1027 22:42:54.092576  485668 config.go:182] Loaded profile config "flannel-293335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-293335 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t54xr" [7984e059-d5b0-4412-a8f5-b34548b5f850] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t54xr" [7984e059-d5b0-4412-a8f5-b34548b5f850] Running
E1027 22:42:59.026976  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:42:59.033342  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:42:59.044687  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:42:59.066042  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:42:59.107384  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:42:59.188740  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:42:59.350120  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003997377s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.16s)
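
The cert_rotation errors interleaved above do not belong to this test: they reference the no-preload-188814 profile's client.crt, which presumably vanished when that profile was torn down by its own test group, so client-go's cert loader logs a miss on every rotation tick. A cleanup sketch, assuming that profile really is gone:

  # minikube delete also prunes the profile's kubeconfig context, cluster, and user entries
  out/minikube-linux-amd64 delete -p no-preload-188814
  # verify no stale context is left behind
  kubectl config get-contexts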

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-293335 exec deployment/netcat -- nslookup kubernetes.default
E1027 22:42:59.671640  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/no-preload-188814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

TestNetworkPlugins/group/bridge/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1027 22:42:59.868811  485668 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/old-k8s-version-908589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-293335 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-293335 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

Test skip (27/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists; the binaries are included within it.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists; the binaries are included within it.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:34: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-617659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-617659
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.76s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires a CNI plugin
panic.go:636: 
----------------------- debugLogs start: kubenet-293335 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-293335

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-293335

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /etc/hosts:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /etc/resolv.conf:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-293335

>>> host: crictl pods:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: crictl containers:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> k8s: describe netcat deployment:
error: context "kubenet-293335" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-293335" does not exist

>>> k8s: netcat logs:
error: context "kubenet-293335" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-293335" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-293335" does not exist

>>> k8s: coredns logs:
error: context "kubenet-293335" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-293335" does not exist

>>> k8s: api server logs:
error: context "kubenet-293335" does not exist

>>> host: /etc/cni:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: ip a s:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: ip r s:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: iptables-save:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: iptables table nat:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-293335" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-293335" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-293335" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: kubelet daemon config:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> k8s: kubelet logs:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-219241
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-695499
contexts:
- context:
    cluster: cert-expiration-219241
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-219241
  name: cert-expiration-219241
- context:
    cluster: kubernetes-upgrade-695499
    user: kubernetes-upgrade-695499
  name: kubernetes-upgrade-695499
current-context: ""
kind: Config
users:
- name: cert-expiration-219241
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/cert-expiration-219241/client.crt
    client-key: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/cert-expiration-219241/client.key
- name: kubernetes-upgrade-695499
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/client.crt
    client-key: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-293335

>>> host: docker daemon status:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: docker daemon config:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: docker system info:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: cri-docker daemon status:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: cri-docker daemon config:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: cri-dockerd version:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: containerd daemon status:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: containerd daemon config:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: containerd config dump:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: crio daemon status:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: crio daemon config:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: /etc/crio:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"

>>> host: crio config:
* Profile "kubenet-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293335"
----------------------- debugLogs end: kubenet-293335 [took: 3.58459499s] --------------------------------
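
Every probe in the debugLogs above failed the same way because the test was skipped before any cluster was created, so the collector ran against a profile that never existed. A quick confirmation sketch:

  # kubenet-293335 should be absent from the profile list
  out/minikube-linux-amd64 profile list
  # and kubectl should report that no such context exists
  kubectl config get-contexts kubenet-293335
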
helpers_test.go:175: Cleaning up "kubenet-293335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-293335
--- SKIP: TestNetworkPlugins/group/kubenet (3.76s)

TestNetworkPlugins/group/cilium (4.04s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-293335 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-293335

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-293335" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-219241
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-482142/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-695499
contexts:
- context:
    cluster: cert-expiration-219241
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 22:35:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-219241
  name: cert-expiration-219241
- context:
    cluster: kubernetes-upgrade-695499
    user: kubernetes-upgrade-695499
  name: kubernetes-upgrade-695499
current-context: ""
kind: Config
users:
- name: cert-expiration-219241
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/cert-expiration-219241/client.crt
    client-key: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/cert-expiration-219241/client.key
- name: kubernetes-upgrade-695499
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/client.crt
    client-key: /home/jenkins/minikube-integration/21790-482142/.minikube/profiles/kubernetes-upgrade-695499/client.key
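Note: the kubeconfig above holds only the cert-expiration-219241 and kubernetes-upgrade-695499 entries, and current-context is "", which is why every kubectl probe in this dump fails with context "cilium-293335" does not exist. A minimal sketch for confirming that from a shell (plain kubectl subcommands; the context name below is just one taken from this config, not a recommendation):

  kubectl config get-contexts
  kubectl config use-context cert-expiration-219241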

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-293335

>>> host: docker daemon status:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: docker daemon config:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: docker system info:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: cri-docker daemon status:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: cri-docker daemon config:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: cri-dockerd version:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: containerd daemon status:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: containerd daemon config:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: containerd config dump:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: crio daemon status:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: crio daemon config:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: /etc/crio:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

>>> host: crio config:
* Profile "cilium-293335" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293335"

----------------------- debugLogs end: cilium-293335 [took: 3.862121087s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-293335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-293335
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)
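If a stale cilium-293335 profile ever survived this cleanup, the log's own suggestions cover the manual fix; a short sketch, assuming minikube is on PATH rather than invoked as out/minikube-linux-amd64:

  minikube profile list
  minikube delete -p cilium-293335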